In my career, I've dug through a number of scene graph renderer internals. Almost always, they claim to be "hardware independent" by abstracting the hardware, and then they go ahead and expose functions like "bindSecondTexture()" and "setAlphaBlendFunc()" to objects, and have objects "render" themselves by calling those functions.
That's not really an abstraction. It exposes how hardware worked circa 1998, and then delegates responsibility for actually presenting the global scene to each object within the system. Predictably, such a system will then end up being terrible if you want to implement global presentation functions, like shadows, deferred rendering, and so on.
In 2002, I said "an object should present itself to the rendering system as a collection of vertices, indices, material state and transform state, and let the system figure out how to draw it to hardware," and that claim holds true better than ever today. In a "direct" drawing environment like the one described above, the overall system doesn't know whether an object will turn alpha blending on or not, even if it does the same thing each and every time it gets called upon to render -- it *could* be doing something else the next time! Thus, there is no optimization possible based on global scene knowledge.
Meanwhile, if an object allocates fixed vertex, index and material buffers, and updates its transform state as it moves through the world (rather than in response to a "draw" call), then the act of actually presenting the scene can be optimized, improved and ported to any kind of rendering system. As a bonus, it actually makes the life of an object much simpler, because it doesn't need to worry about configuring a piece of hardware at all; it just declares what it wants to have done, once, and that's it!
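To make that concrete, here's a minimal sketch of the declarative approach; the struct and function names (RenderItem, presentScene) are hypothetical, not from any particular engine. The object fills in its data once, and the renderer -- which owns the whole list -- is free to apply global knowledge, such as sorting by material to minimize state changes:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical declarative render item: an object fills this in once
// (or when its data actually changes), never in response to a draw call.
struct RenderItem {
    uint32_t materialId;    // opaque handle to material/blend state
    uint32_t vertexBuffer;  // handle to a fixed vertex buffer
    uint32_t indexBuffer;   // handle to a fixed index buffer
    float    transform[16]; // updated as the object moves, not at draw time
};

// Because the renderer sees the full scene up front, it can reorder
// freely. A shadow pass or a deferred pipeline could walk the same list
// without any object ever knowing.
void presentScene(std::vector<RenderItem>& items) {
    // Global optimization: batch by material so state changes are minimal.
    std::sort(items.begin(), items.end(),
              [](const RenderItem& a, const RenderItem& b) {
                  return a.materialId < b.materialId;
              });
    for (const RenderItem& item : items) {
        // In a real backend: bind material, set transform, draw indexed.
        std::cout << "draw with material " << item.materialId << "\n";
    }
}
```

Note that nothing here touches hardware state from inside an object; the backend behind presentScene can be swapped out entirely.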
There's lots of material out there that shows how to do it right, and there has been for the last 15 years, ever since SGI decided to take all the learning they had from GL and turn it into OpenGL. You just have to know to go read it, which means two things:
1. You know that it's there.
2. You know that you need to know.
For some reason, in a lot of cases, those steps seem to have been missed at some core level.
So, the following comment I ran across a while back is fairly typical:
// I need to set back and front, because if the asset has reversed culling
// normals, I also want to change the lighting model to reverse normals,
// which will cause it to use the back side material parameters.
The OpenGL graphics API (and most APIs, these days) has two separate concepts: back- vs front-facing triangles (what the above talks about), and whether clockwise or counter-clockwise winding counts as front. Thus, instead of the machinations described above (changing glColorMaterial(), glCullFace() and even scaling normals by -1!), the same operation could be done by simply calling glFrontFace() to select whether clockwise or counter-clockwise faces are considered "front."
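A sketch of what that looks like (this is a fragment, not runnable code -- it assumes an active OpenGL context, and the flag name is made up for illustration):

```cpp
// Instead of swapping glColorMaterial()/glCullFace() settings and scaling
// normals by -1, flip what "front" means for this asset:
if (assetHasReversedWinding) {
    glFrontFace(GL_CW);   // clockwise triangles are now front-facing
} else {
    glFrontFace(GL_CCW);  // counter-clockwise, the OpenGL default
}
// Culling and front/back material selection both follow from the
// front-face definition -- no normal flipping required.
```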
But, you know, writing code is so much more rewarding than reading other people's code, right? Hence, writing code always feels like the right thing to do.