Does following SOLID lead to writing a framework on top of the tech stack?

3 votes
I like SOLID, and I try my best to use and apply it when I'm developing. But I can't help but feel as though the SOLID approach turns your code into 'framework' code, i.e. code you would design if you were creating a framework or library for other developers to use.



I've generally practiced two modes of programming: creating more or less exactly what is asked for by the requirements, following KISS (typical programming), or creating very generic and reusable logic, services, etc. that provide the flexibility other developers may need (framework programming).



If the user really just wants an application to do x and y things, does it make sense to follow SOLID and add in a whole bunch of entry points of abstraction, when you don't even know whether that is a valid problem to begin with? If you do add these entry points of abstraction, are you really fulfilling the user's requirements, or are you creating a framework on top of your existing framework and tech stack to make future additions easier? In that case, are you serving the interests of the customer, or of the developer?



This seems common in the Java Enterprise world, where it feels as though you're designing your own framework on top of J2EE or Spring so that the developer gets a better experience, instead of focusing on the experience of the user.










      frameworks solid






asked 2 hours ago by Igneous01









          3 Answers

















5 votes













Your observation is correct: the SOLID principles are, IMHO, made with reusable libraries or framework code in mind. When you just follow all of them in a cargo-cult manner, without asking whether it makes sense or not, you risk overgeneralizing and investing a lot more effort in your system than is probably necessary.



This is a trade-off, and it takes some experience to make the right decisions about when to generalize and when not. A possible approach is to stick to the YAGNI principle: do not make your code SOLID "just in case". Or, to use your words, do not




          provide the flexibility other developers may need




          instead, provide the flexibility other developers actually need as soon as they need it, but not earlier.



So whenever there is a function or class in your code that you are not sure could be reused, don't put it into your framework right now. Wait until you have an actual case for reuse, and refactor to SOLID at that point. And don't implement more configurability (following the OCP) or more entry points of abstraction (using the DIP) into such a class than you really need for the actual reuse case. Add the next piece of flexibility when the next requirement for reuse actually arrives.
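As a sketch of that progression (all names here are hypothetical, invented for illustration): start concrete, and extract the abstraction only when the second, real use case shows up.

```java
import java.util.List;

// Step 1: the only requirement today is CSV export, so write exactly that.
// No ReportExporter interface, no plug-in mechanism "just in case".
class CsvReportExporter {
    String export(List<String> rows) {
        return String.join("\n", rows);
    }
}

// Step 2 (only once a real second format, say PDF, is actually required):
// extract the interface and refactor, because the reuse case now exists.
interface ReportExporter {
    String export(List<String> rows);
}
```

The point is the ordering: the interface is the refactoring's output, not a speculative input.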



Of course, this way of working will always require some amount of refactoring of the existing, working code base. That is why automated tests are important here. So making your code SOLID enough right from the start to be unit-testable is not a waste of time, and doing so does not contradict YAGNI. Automated tests are a valid case of "code reuse", since the code at stake is used from production code as well as from tests. But keep in mind: add just the flexibility you actually need to make the tests work, no less, no more.
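A sketch of "just enough flexibility for the test" (class and method names are made up for illustration): the class takes only the one seam its unit test needs, an injected clock, rather than a full DI setup.

```java
import java.time.LocalTime;
import java.util.function.Supplier;

// The only abstraction introduced is the one the unit test requires:
// the current time is injected so a test can pin it to a fixed value.
class GreetingService {
    private final Supplier<LocalTime> clock;

    GreetingService(Supplier<LocalTime> clock) {
        this.clock = clock;
    }

    String greet() {
        return clock.get().getHour() < 12 ? "Good morning" : "Good afternoon";
    }
}
```

Production code passes `LocalTime::now`; the test passes a lambda returning a fixed time. No further configurability is added until something actually needs it.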



This is actually old wisdom. Long before the term SOLID became popular, someone told me that before we try to write reusable code, we should write usable code. And I still think this is a good recommendation.






answered 1 hour ago by Doc Brown



























2 votes













            From my experience, when writing an app, you have three choices:



            1. Write code solely to fulfil the requirements,

            2. Write generic code that anticipates future requirements, as well as fulfilling the current requirements,

            3. Write code that only fulfils the current requirements, but in a way that's easy to change later to meet other needs.

In the first case, it's common to end up with tightly coupled code that lacks unit tests. Sure, it's quick to write, but it's hard to test. And it's a right royal pain to change later when the requirements change.



In the second case, huge amounts of time are spent trying to anticipate future needs. And all too often those anticipated future requirements never materialise. This seems to be the scenario you are describing. It's a waste of effort most of the time, and it results in unnecessarily complex code that's still hard to change when an unanticipated requirement turns up.



The last case is the one to aim for in my view. Use TDD or similar techniques to test the code as you go and you'll end up with loosely coupled code that's easy to modify yet still quick to write. And the thing is, by doing this, you naturally follow many of the SOLID principles: small classes and functions; interfaces and injected dependencies. And Mrs Liskov is generally kept happy too, as simple classes with single responsibilities rarely fall foul of her substitution principle.
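For instance (a hypothetical sketch, not from any particular codebase): writing the test first forces a seam, and the seam turns out to be exactly the small interface plus injected dependency that SOLID would have asked for.

```java
import java.util.ArrayList;
import java.util.List;

// The test needed a way to observe notifications, so the collaborator
// naturally became an interface with an injected implementation.
interface Notifier {
    void send(String user, String message);
}

class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void placeOrder(String user) {
        // ... persist the order ...
        notifier.send(user, "Order received");
    }
}

// A trivial fake substitutes cleanly in tests, which is also why classes
// this small rarely run afoul of the substitution principle.
class RecordingNotifier implements Notifier {
    final List<String> sent = new ArrayList<>();

    public void send(String user, String message) {
        sent.add(user + ": " + message);
    }
}
```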



            The only aspect of SOLID that doesn't really apply here is the open/closed principle. For libraries and frameworks, this is important. For a self-contained app, not so much. Really it's a case of writing code that follows "SLID": easy to write (and read), easy to test and easy to maintain.

































0 votes













The perspective you have can be skewed by personal experience. This is a slippery slope of facts that are individually correct, but the resulting inference isn't, even though it looks correct at first glance.



              • Frameworks are larger in scope than small projects.

              • Bad practice is significantly harder to deal with in larger codebases.

              • Building a framework (on average) requires a more skilled developer than building a small project.

              • Better developers follow good practice (SOLID) more.

              • As a result, frameworks have a higher need for good practice and tend to be built by developers who are more closely experienced with good practice.

This means that, among the frameworks and smaller libraries you interact with, good-practice code will more commonly be found in the bigger frameworks.



This fallacy is very common, e.g. "every doctor I've been treated by was arrogant; therefore I conclude that all doctors are arrogant." These fallacies always suffer from making a blanket inference based on personal experience.



              In your case, it's possible that you've predominantly experienced good practice in larger frameworks and not in smaller libraries. Your personal observation isn't wrong, but it's anecdotal evidence and not universally applicable.





              2 modes of programming - creating more or less exactly what is asked via requirements and KISS (typical programming), or creating very generic and reusable logic, services, etc that provide the flexibility other developers may need (framework programming)




You're somewhat confirming this here. Think of what a framework is. It is not an application. It's a generalized "template" that others can use to build all sorts of applications. Logically, that means a framework is built with much more abstracted logic in order to be usable by everyone.



              Framework builders are incapable of taking shortcuts, because they don't even know what the requirements of the subsequent applications are. Building a framework inherently incentivizes them to make their code usable for others.



              Application builders, however, have the ability to compromise on logical efficiency because they are focused on delivering a product. Their main goal is not the workings of the code but rather the experience of the user.



              For a framework, the end user is another developer, who will be interacting with your code. The quality of your code matters to your end user.

              For an application, the end user is a non-developer, who won't be interacting with your code. The quality of your code is of no importance to them.





              If you do add these entry points of abstraction, are you really fulfilling the users requirements, or are you creating a framework on top of your existing framework and tech stack to make future additions easier? In which case are you serving the interests of the customer, or of the developer?




              This is an interesting point, and it's (in my experience) the main reason why people still try to justify avoiding good practice.



To summarize the points below: skipping good practice can only be justified if your requirements (as currently known) are immutable and there will never be a change or addition to the codebase.

              For example, when I write a 5 minute console application to process a particular file, I don't use good practice. Because I'm only going to use the application today, and it doesn't need to be updated in the future (it'd be easier to write a different application should I need one again).



              Let's say you can shoddily build an application in 4 weeks, and you can properly build it in 6 weeks. At first sight, shoddily building it seems better. The customer gets their application quicker, and the company has to spend less time on developer wages. Win/win, right?



              However, this is a decision made without thinking ahead. Because of the quality of the codebase, making a major change to the shoddily built one will take 2 weeks, while making the same changes to the properly built one takes 1 week. There may be many of these changes coming up in the future.
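The arithmetic of that hypothetical can be made explicit. With these (invented) numbers, the properly built codebase breaks even at the second major change:

```java
// Hypothetical costs from the example above: initial build plus the
// cost of n major changes, in weeks, for each approach.
class CostModel {
    static int shoddyWeeks(int changes) {
        return 4 + 2 * changes; // 4 weeks to build, 2 weeks per change
    }

    static int properWeeks(int changes) {
        return 6 + changes;     // 6 weeks to build, 1 week per change
    }
}
```

With two changes, both sit at 8 weeks; from the third change onward, the shoddy codebase is strictly more expensive, before even counting overruns and bug hunts.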



Furthermore, in shoddily built codebases there is a tendency for changes to unexpectedly require more work than you initially thought, likely pushing your development time to 3 weeks instead of 2.



And then there's also the tendency to waste time hunting for bugs. This is often the case in projects where logging was skipped due to time constraints, or out of sheer unwillingness to implement it because you absentmindedly assume the end product will work as expected.



It doesn't even need to be a major update. At my current employer, I've seen several projects that were built quick and dirty, and when the tiniest bug fix or change needed to be made due to a miscommunication in the requirements, it led to a chain reaction of needing to refactor module after module. Some of these projects collapsed (leaving behind an unmaintainable mess) before they even released their first version.



Shortcut decisions (quick and dirty programming) are only beneficial if you can conclusively guarantee that the requirements are exactly correct and will never need to change. In my experience, I've never come across a project where that was true.



              Investing the extra time in good practice is investing in the future. Future bugs and changes will be so much easier when the existing codebase is built on good practice. It will already be paying dividends after only two or three changes are made.




























                Your Answer







                StackExchange.ready(function()
                var channelOptions =
                tags: "".split(" "),
                id: "131"
                ;
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function()
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled)
                StackExchange.using("snippets", function()
                createEditor();
                );

                else
                createEditor();

                );

                function createEditor()
                StackExchange.prepareEditor(
                heartbeatType: 'answer',
                convertImagesToLinks: false,
                noModals: false,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: null,
                bindNavPrevention: true,
                postfix: "",
                onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                );



                );













                 

                draft saved


                draft discarded


















                StackExchange.ready(
                function ()
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f379095%2fdoes-following-solid-lead-to-writing-a-framework-on-top-of-the-tech-stack%23new-answer', 'question_page');

                );

                Post as a guest






























                3 Answers
                3






                active

                oldest

                votes








                3 Answers
                3






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes








                up vote
                5
                down vote













                Your observation is correct, the SOLID principles are IMHO made with reusable libraries or framework code in mind. When you just follow all of them in a cargo-cult manner, without asking if it makes sense or not, you are risking to overgeneralize and invest a lot more effort into your system than probably necessary.



                This is a trade-off, and it needs some experience to make the right decisions about when to generalize and when not. A possible approach to this is to stick to the YAGNI principle - do not make your code SOLID "just in case" - or, to use your words: do not




                provide the flexibility other developers may need




                instead, provide the flexibility other developers actually need as soon as they need it, but not earlier.



                So whenever you have one function or class in your code you are not sure if it could be reused, don't put it into your framework right now. Wait until you have an actual case for reusage and refactor to SOLID when you have an actual case for reusage. And don't implement more configurability (following the OCP), or entry points of abstraction (using the DIP) into such a class as you really need for the actual reusage case. Add the next flexibility when the next requirement for reusage is actually there.



                Of course, this way of working will always require some amount of refactoring at the existing, working code base. That is why automatic tests are important here. So making your code SOLID enough right from the start to have it unit-testable is not a waste-of-time, and doing so does not contradict YAGNI. Automatic tests are a valid case for "code reusage", since the code in stake is used from production code as well as from tests. But keep in mind, just add the flexibility you actually need for making the tests work, no less, not more.



                This is actually old wisdom. Long ago before the term SOLID got popular, someone told me we before we should try to write reusable code, we should write usable code. And I still think this is a good recommendation.






                share|improve this answer
























                  up vote
                  5
                  down vote













                  Your observation is correct, the SOLID principles are IMHO made with reusable libraries or framework code in mind. When you just follow all of them in a cargo-cult manner, without asking if it makes sense or not, you are risking to overgeneralize and invest a lot more effort into your system than probably necessary.



                  This is a trade-off, and it needs some experience to make the right decisions about when to generalize and when not. A possible approach to this is to stick to the YAGNI principle - do not make your code SOLID "just in case" - or, to use your words: do not




                  provide the flexibility other developers may need




                  instead, provide the flexibility other developers actually need as soon as they need it, but not earlier.



                  So whenever you have one function or class in your code you are not sure if it could be reused, don't put it into your framework right now. Wait until you have an actual case for reusage and refactor to SOLID when you have an actual case for reusage. And don't implement more configurability (following the OCP), or entry points of abstraction (using the DIP) into such a class as you really need for the actual reusage case. Add the next flexibility when the next requirement for reusage is actually there.



                  Of course, this way of working will always require some amount of refactoring at the existing, working code base. That is why automatic tests are important here. So making your code SOLID enough right from the start to have it unit-testable is not a waste-of-time, and doing so does not contradict YAGNI. Automatic tests are a valid case for "code reusage", since the code in stake is used from production code as well as from tests. But keep in mind, just add the flexibility you actually need for making the tests work, no less, not more.



                  This is actually old wisdom. Long ago before the term SOLID got popular, someone told me we before we should try to write reusable code, we should write usable code. And I still think this is a good recommendation.






                  share|improve this answer






















                    up vote
                    5
                    down vote










                    up vote
                    5
                    down vote









                    Your observation is correct, the SOLID principles are IMHO made with reusable libraries or framework code in mind. When you just follow all of them in a cargo-cult manner, without asking if it makes sense or not, you are risking to overgeneralize and invest a lot more effort into your system than probably necessary.



                    This is a trade-off, and it needs some experience to make the right decisions about when to generalize and when not. A possible approach to this is to stick to the YAGNI principle - do not make your code SOLID "just in case" - or, to use your words: do not




                    provide the flexibility other developers may need




                    instead, provide the flexibility other developers actually need as soon as they need it, but not earlier.



                    So whenever you have one function or class in your code you are not sure if it could be reused, don't put it into your framework right now. Wait until you have an actual case for reusage and refactor to SOLID when you have an actual case for reusage. And don't implement more configurability (following the OCP), or entry points of abstraction (using the DIP) into such a class as you really need for the actual reusage case. Add the next flexibility when the next requirement for reusage is actually there.



                    Of course, this way of working will always require some amount of refactoring at the existing, working code base. That is why automatic tests are important here. So making your code SOLID enough right from the start to have it unit-testable is not a waste-of-time, and doing so does not contradict YAGNI. Automatic tests are a valid case for "code reusage", since the code in stake is used from production code as well as from tests. But keep in mind, just add the flexibility you actually need for making the tests work, no less, not more.



                    This is actually old wisdom. Long ago before the term SOLID got popular, someone told me we before we should try to write reusable code, we should write usable code. And I still think this is a good recommendation.






                    share|improve this answer












                    Your observation is correct, the SOLID principles are IMHO made with reusable libraries or framework code in mind. When you just follow all of them in a cargo-cult manner, without asking if it makes sense or not, you are risking to overgeneralize and invest a lot more effort into your system than probably necessary.



                    This is a trade-off, and it needs some experience to make the right decisions about when to generalize and when not. A possible approach to this is to stick to the YAGNI principle - do not make your code SOLID "just in case" - or, to use your words: do not




                    provide the flexibility other developers may need




                    instead, provide the flexibility other developers actually need as soon as they need it, but not earlier.



                    So whenever you have one function or class in your code you are not sure if it could be reused, don't put it into your framework right now. Wait until you have an actual case for reusage and refactor to SOLID when you have an actual case for reusage. And don't implement more configurability (following the OCP), or entry points of abstraction (using the DIP) into such a class as you really need for the actual reusage case. Add the next flexibility when the next requirement for reusage is actually there.



                    Of course, this way of working will always require some amount of refactoring at the existing, working code base. That is why automatic tests are important here. So making your code SOLID enough right from the start to have it unit-testable is not a waste-of-time, and doing so does not contradict YAGNI. Automatic tests are a valid case for "code reusage", since the code in stake is used from production code as well as from tests. But keep in mind, just add the flexibility you actually need for making the tests work, no less, not more.



                    This is actually old wisdom. Long ago before the term SOLID got popular, someone told me we before we should try to write reusable code, we should write usable code. And I still think this is a good recommendation.







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered 1 hour ago









                    Doc Brown

                    125k21228364




                    125k21228364






















                        up vote
                        2
                        down vote













                        From my experience, when writing an app, you have three choices:



                        1. Write code solely to fulfil the requirements,

                        2. Write generic code that anticipates future requirements, as well as fulfilling the current requirements,

                        3. Write code that only fulfils the current requirements, but in a way that's easy to change later to meet other needs.

                        In the first case, it's common to end up with tightly coupled code that lacks unit tests. Sure it's quick to write, but it's hard to test. And it's a right royal pain to change later when the requirements change.



                        In the second case, huge amounts of time is spent trying to anticipate future needs. And all too often those anticipated future requirements never materialise. This seems the scenario that you are describing. It's a waste of effort most of the time and results in unnecessarily complex code that's still hard to change when a requirement that wasn't anticipated for turns up.



                        The last case is the one to aim for in my view. Use TDD or similar techniques to test the code as you go and you'll end up with loosely coupled code, that's easy to modify yet still quick to write. And the thing is, by doing this, you naturally follow many of the SOLID principles: small classes and functions; interfaces and injected dependencies. And Mrs Liskov is generally kept happy too as simple classes with single responsibilities rarely fall foul of her substitution principle.



                        The only aspect of SOLID that doesn't really apply here is the open/closed principle. For libraries and frameworks, this is important. For a self-contained app, not so much. Really it's a case of writing code that follows "SLID": easy to write (and read), easy to test and easy to maintain.






                        share|improve this answer
























                          up vote
                          2
                          down vote













                          From my experience, when writing an app, you have three choices:



                          1. Write code solely to fulfil the requirements,

                          2. Write generic code that anticipates future requirements, as well as fulfilling the current requirements,

                          3. Write code that only fulfils the current requirements, but in a way that's easy to change later to meet other needs.

                          In the first case, it's common to end up with tightly coupled code that lacks unit tests. Sure it's quick to write, but it's hard to test. And it's a right royal pain to change later when the requirements change.



                          In the second case, huge amounts of time is spent trying to anticipate future needs. And all too often those anticipated future requirements never materialise. This seems the scenario that you are describing. It's a waste of effort most of the time and results in unnecessarily complex code that's still hard to change when a requirement that wasn't anticipated for turns up.



                          The last case is the one to aim for in my view. Use TDD or similar techniques to test the code as you go and you'll end up with loosely coupled code, that's easy to modify yet still quick to write. And the thing is, by doing this, you naturally follow many of the SOLID principles: small classes and functions; interfaces and injected dependencies. And Mrs Liskov is generally kept happy too as simple classes with single responsibilities rarely fall foul of her substitution principle.



                          The only aspect of SOLID that doesn't really apply here is the open/closed principle. For libraries and frameworks, this is important. For a self-contained app, not so much. Really it's a case of writing code that follows "SLID": easy to write (and read), easy to test and easy to maintain.
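To illustrate why open/closed matters more to frameworks (hypothetical names again): a framework's callers can't edit its source, so new behaviour has to arrive through an extension point like the interface below. An application that owns all its code can usually get away with editing a plain conditional instead.

```java
import java.util.List;

// Hypothetical open/closed-style extension point: new discount behaviour is
// added by passing in another rule, without modifying Checkout itself.
interface DiscountRule {
    double apply(double amount);
}

class Checkout {
    private final List<DiscountRule> rules;

    Checkout(List<DiscountRule> rules) {
        this.rules = rules;
    }

    double pay(double amount) {
        // Each rule transforms the running amount in turn.
        for (DiscountRule rule : rules) {
            amount = rule.apply(amount);
        }
        return amount;
    }
}
```

A caller supplies its own rule, e.g. `DiscountRule flatTen = a -> a - 10.0;`, and `new Checkout(List.of(flatTen)).pay(100.0)` returns `90.0` — no change to `Checkout` needed. In a self-contained app, a plain `if` inside `pay` would do the same job and be just as easy to change.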






                            answered 27 mins ago









                            David Arno

                            24.7k64682



















                                up vote
                                0
                                down vote













The perspective you have may be skewed by personal experience. It's a chain of facts that are individually correct, but the resulting inference isn't, even though it looks correct at first glance.



                                • Frameworks are larger in scope than small projects.

                                • Bad practice is significantly harder to deal with in larger codebases.

                                • Building a framework (on average) requires a more skilled developer than building a small project.

                                • Better developers follow good practice (SOLID) more.

• As a result, frameworks have a higher need for good practice and tend to be built by developers who are more experienced with it.

This means that, across the frameworks and libraries you interact with, good practice code will more commonly be found in the bigger frameworks.



This fallacy is very common, e.g. "every doctor I've been treated by was arrogant, therefore all doctors are arrogant." Such fallacies stem from making a blanket inference based on personal experience.



                                In your case, it's possible that you've predominantly experienced good practice in larger frameworks and not in smaller libraries. Your personal observation isn't wrong, but it's anecdotal evidence and not universally applicable.





                                2 modes of programming - creating more or less exactly what is asked via requirements and KISS (typical programming), or creating very generic and reusable logic, services, etc that provide the flexibility other developers may need (framework programming)




You're somewhat confirming this here. Think of what a framework is. It is not an application; it's a generalised "template" that others can use to build all sorts of applications. Logically, that means a framework is built on much more abstracted logic in order to be usable by everyone.



                                Framework builders are incapable of taking shortcuts, because they don't even know what the requirements of the subsequent applications are. Building a framework inherently incentivizes them to make their code usable for others.



                                Application builders, however, have the ability to compromise on logical efficiency because they are focused on delivering a product. Their main goal is not the workings of the code but rather the experience of the user.



                                For a framework, the end user is another developer, who will be interacting with your code. The quality of your code matters to your end user.

                                For an application, the end user is a non-developer, who won't be interacting with your code. The quality of your code is of no importance to them.





                                If you do add these entry points of abstraction, are you really fulfilling the users requirements, or are you creating a framework on top of your existing framework and tech stack to make future additions easier? In which case are you serving the interests of the customer, or of the developer?




                                This is an interesting point, and it's (in my experience) the main reason why people still try to justify avoiding good practice.



To summarize the points below: skipping good practice can only be justified if your requirements (as currently known) are immutable and there will never be a change or addition to the codebase.

For example, when I write a five-minute console application to process a particular file, I don't use good practice, because I'm only going to use the application today and it won't need updating in the future (it'd be easier to write a different application should I need one again).



                                Let's say you can shoddily build an application in 4 weeks, and you can properly build it in 6 weeks. At first sight, shoddily building it seems better. The customer gets their application quicker, and the company has to spend less time on developer wages. Win/win, right?



                                However, this is a decision made without thinking ahead. Because of the quality of the codebase, making a major change to the shoddily built one will take 2 weeks, while making the same changes to the properly built one takes 1 week. There may be many of these changes coming up in the future.



Furthermore, changes tend to unexpectedly require more work than initially thought in shoddily built codebases, likely pushing that development time to three weeks instead of two.



                                And then there's also the tendency to waste time hunting for bugs. This is often the case in projects where logging has been ignored due to time constraints or sheer unwillingness to implement it because you absentmindedly work under the assumption that the end product will work as expected.
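A little logging goes a long way here. As a minimal sketch (hypothetical class, using only `java.util.logging` from the standard library), the kind of breadcrumb trail that shortens those bug hunts:

```java
import java.util.logging.Logger;

class FileProcessor {
    private static final Logger LOG = Logger.getLogger(FileProcessor.class.getName());

    // Processes lines, logging what it skips so a wrong result can be traced
    // later without a debugger. Returns the number of lines processed.
    int process(String[] lines) {
        int processed = 0;
        for (String line : lines) {
            if (line.trim().isEmpty()) {
                LOG.warning("skipping blank line");
                continue;
            }
            processed++;
        }
        LOG.info("processed " + processed + " lines");
        return processed;
    }
}
```

When the end product doesn't "work as expected", those two log lines are often the difference between a five-minute diagnosis and an afternoon of guesswork.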



It doesn't even need to be a major update. At my current employer, I've seen several projects that were built quick and dirty, and when the tiniest bug fix or change was needed due to a miscommunication in the requirements, it led to a chain reaction of refactoring module after module. Some of these projects collapsed (leaving behind an unmaintainable mess) before they even released their first version.



Shortcut decisions (quick and dirty programming) are only beneficial if you can conclusively guarantee that the requirements are exactly right and will never change. In my experience, I've never come across a project where that was true.



                                Investing the extra time in good practice is investing in the future. Future bugs and changes will be so much easier when the existing codebase is built on good practice. It will already be paying dividends after only two or three changes are made.






                                    edited 1 hour ago

























                                    answered 1 hour ago









                                    Flater

                                    2,899413


























                                         
