If the goal is to reuse code between projects and adopt a consistent approach to building applications, this can be accomplished by convention rather than by spending a lot of time trying to invent a framework.
As a starting point I would highly recommend reading "Clean Code" by Robert C. Martin (Uncle Bob) as a refreshing view of how to write better code. My own realization came from comparing software development with writing or painting. Many of the best writers still start their works with pen (or pencil) and paper, not on the computer, because they are applying the time-honoured technique of drafting their ideas before reviewing and refining them. Painters don't just crack out the oils and produce a finished portrait. They draft in lines, paint over what they aren't satisfied with, and gradually build the portrait through refinement.
This is where software fails. Too often, a developer will write a block of code to satisfy a requirement, verify that it "works", and then move on to the next requirement. There is little to no refinement involved, either in engineering terms (performance and stability) or in how well the intent behind the requirement is satisfied. Only later, when the product is put through its paces, do issues around performance or behaviour start to surface. By then the code is draft built upon draft, built upon draft.
Frameworks only compound this failure: they restrict the options for accomplishing a task and significantly increase the investment required to refine any specific scenario. The failure to draft ideas and then refactor not only the code but also the design and requirements is the paramount failing of a project.
The second step to building better projects is to gain an appreciation and understanding of object-oriented software development, S.O.L.I.D. principles, and design patterns. Technologies change all the time, but the fundamentals don't. Learn to write self-contained units of work that have clear boundaries, and develop automated unit tests around these units so that those boundaries are monitored. Getting to this point is 90% of the battle. Each unit of work can now be utilized in any number of projects, giving a consistent approach to a specific domain problem. If a unit doesn't suit a given scenario it can be substituted or extended without corrupting the original.

There is no framework and no configuration, only simple components representing distinct units of work. Tying these together is basic, fundamental OOP: inheritance and composition via interfaces. The components should not care how they are persisted, how they communicate with one another, or even what concrete implementation they are communicating with. Too often these details (persistence and communication) become the driving force behind building a framework. They start introducing limitations on the desired behaviour of the application, and the instant this happens the framework is working against you. Clients don't pay developers to tell them what they cannot have, or to come up with ways to make what they want more expensive.
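To make the idea of a self-contained unit of work behind a clear boundary concrete, here is a minimal sketch. The article's context is .NET, but the same shape translates directly to Java, which is used here; all names (`TaxCalculator`, `InvoiceTotaller`, and the flat-rate rule) are purely illustrative, not from the original.

```java
import java.math.BigDecimal;

// The boundary: callers depend on this interface, never on a concrete class.
interface TaxCalculator {
    BigDecimal taxFor(BigDecimal amount);
}

// One self-contained implementation. It knows nothing about persistence,
// transport, or who calls it — it just does its one unit of work.
class FlatRateTaxCalculator implements TaxCalculator {
    private final BigDecimal rate;

    FlatRateTaxCalculator(BigDecimal rate) {
        this.rate = rate;
    }

    @Override
    public BigDecimal taxFor(BigDecimal amount) {
        return amount.multiply(rate);
    }
}

// Composition via the interface: the totaller is handed a calculator rather
// than constructing one, so the tax rules can be substituted or extended
// without corrupting this class.
class InvoiceTotaller {
    private final TaxCalculator tax;

    InvoiceTotaller(TaxCalculator tax) {
        this.tax = tax;
    }

    BigDecimal total(BigDecimal net) {
        return net.add(tax.taxFor(net));
    }
}
```

A unit test exercising `InvoiceTotaller` only needs a trivial stub `TaxCalculator`, which is exactly the "monitored boundary" the text describes: the test pins down the contract, not the wiring.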
Most frameworks that I've worked on have been designed with a lot of good coding principles and patterns in mind, but also with a significant investment in the intangible: teams pour many hours into "framework code" that does not directly meet functional requirements. This forms a kind of debt that they expect to recoup once the framework is mature enough that features can be quickly bolted on and configured. Unfortunately, the vision at the start is incomplete, and a lot of extra time is needed to adjust for, or work around, limitations created by the framework's earlier assumptions.
The benefit of breaking things down into units of work is that it opens the door to updating popular components to new technologies while still allowing them to interoperate with older components. There are challenges as components are upgraded, but they lie only in the ties between components, and by following good patterns and practices they are easily dealt with. A good example is dealing with collections of objects: in .NET 1.1 the common approach was the ArrayList, while .NET 2.0 and onward adopted IList<> generics. If we refactor units of work to take advantage of technologies such as LINQ but still want to utilize those components in existing products, we manage the translation between ArrayLists and generic lists via adapters.
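Java went through the same transition (raw `ArrayList` before generics, `List<T>` after), so the adapter idea can be sketched there; the .NET version is structurally identical. `LegacyCustomerStore` and `CustomerStoreAdapter` are hypothetical names invented for this illustration.

```java
import java.util.ArrayList;
import java.util.List;

// A legacy component written in pre-generics style: it returns a raw
// ArrayList whose elements are known (by convention only) to be Strings.
class LegacyCustomerStore {
    @SuppressWarnings({"rawtypes", "unchecked"})
    ArrayList findNames() {
        ArrayList names = new ArrayList();
        names.add("Ada");
        names.add("Grace");
        return names;
    }
}

// The adapter: translates the raw collection into a typed List<String> so
// refactored components can consume it, without modifying the legacy code.
// The unchecked cast lives in exactly one place — the tie between components.
class CustomerStoreAdapter {
    private final LegacyCustomerStore legacy;

    CustomerStoreAdapter(LegacyCustomerStore legacy) {
        this.legacy = legacy;
    }

    List<String> findNames() {
        List<String> typed = new ArrayList<>();
        for (Object o : legacy.findNames()) {
            typed.add((String) o);
        }
        return typed;
    }
}
```

The design point is that neither component changes: the older component keeps its raw collections, the newer one keeps its typed ones, and the adapter absorbs the mismatch at the boundary.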
The overall goal is to keep things as simple to implement as possible, with minimal excess up-front investment. By breaking applications into shareable units of work, implemented in simple and consistent ways, developers can accomplish their tasks while remaining flexible to future change. That is something a framework will never give them without first taking its pound of flesh.