Actually a great example - the idea of megatextures is a nice concept tech-wise, but it's beyond retarded from a workflow point of view. You want to reuse as many assets as possible when generating the world, not custom-paint every corner of it.
I think megatextures are one of the most misunderstood pieces of technology - the name probably has something to do with it - as your misunderstanding is very common.
You do not paint or create the megatexture by hand; it is created automatically by the map compilation tools. The map itself is still built from repeated textures, reusable meshes, brushes and such, like all of id's previous engines going back to Quake 1's editing tools (the Rage editor itself is based on Radiant, though instead of being a standalone tool it is part of the "id tech sdk" or whatever it was called). The megatexture is then generated by baking *everything* into a single gigantic texture. It is essentially lightmapping on steroids, except that while lightmaps only store lighting information, the megatexture stores, for each point on a surface, a composition of whatever texture(s) are used there, together with any decals/splats, small geometric details (small objects are baked into the texture itself), and the full precalculated lighting with shadows (they used a dedicated computer cluster to calculate the megatexture). IIRC in addition to the visual side, the megatexture also bakes other surface information, like what sounds play when a surface is hit, etc (though these are probably stored at lower resolutions).
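To make the baking idea concrete, here is a toy sketch of the compositing step as I understand it - tile the reusable base texture across a surface, burn decals permanently on top, then modulate by the precalculated lighting. All the names, sizes and the layer order here are illustrative, not id's actual pipeline:

```python
import numpy as np

def bake_megatexture(width, height, base_tile, decals, lightmap):
    """Composite base tiling, decals and lighting into one unique texture."""
    mega = np.zeros((height, width, 3), dtype=np.float32)

    # 1. Tile the reusable base texture across the whole surface.
    th, tw, _ = base_tile.shape
    for y in range(height):
        for x in range(width):
            mega[y, x] = base_tile[y % th, x % tw]

    # 2. Splat decals on top (alpha-blended, baked in permanently).
    for (dx, dy, rgba) in decals:
        h, w, _ = rgba.shape
        region = mega[dy:dy + h, dx:dx + w]
        alpha = rgba[..., 3:4]
        region[:] = region * (1.0 - alpha) + rgba[..., :3] * alpha

    # 3. Bake lighting: modulate by the precalculated lightmap
    #    (in Rage this included full shadows, computed offline on a cluster).
    mega *= lightmap[..., None]
    return mega
```

Every texel of the result is unique, which is exactly why the output gets so big - the repetition of the source assets is "spent" at bake time.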
As far as workflow is concerned, optimizing the artist workflow was in fact the main motivation behind megatextures. Carmack didn't want artists to waste time worrying about how to optimize their assets (especially for consoles - megatextures were actually made with consoles in mind, at least in their Rage implementation) and instead wanted them to focus on what artists are supposed to do: make the art for the game, without worrying about any sort of asset optimization. The "optimization" would be handled by the engine itself (the megatexture generator), and indeed that is how it worked - at least according to some second-hand info I have: an old coworker of mine knew an artist who worked on an id Tech 5 game, and the artist told him that one of the things he liked about the engine was that they could throw whatever unoptimized mess at it and it'd just work without slowing things down.
The reason this technology "failed" is because consoles couldn't handle it and nobody wants to fucking admit it.
Actually megatextures didn't fail, quite the opposite. Carmack had been trying for years to convince the GPU manufacturers to introduce some form of virtual texturing (in a similar sense to how CPUs do virtual memory), without success, so with Rage he decided to do it on the CPU. His implementation was mainly designed with the Xbox 360 in mind (where it actually performed best), which has unified memory accessible by both the CPU and GPU at the same time. This allowed him to read data back from the GPU and feed it with megatexture pages (as in "virtual memory pages") without performing any sort of copying. That need for copying was why on PC you'd often see popping: since the GPU has its own dedicated memory, the engine needs to copy data back and forth, and to avoid synchronizing the CPU with the GPU - which would mean that one would do nothing while the other worked - this copying was spread over several frames. Depending on how much memory the system had and how fast communication with the GPU was, the number of frames could vary a lot. This was also a reason why AMD GPUs had issues at the beginning, since they misreported the available memory size to the engine.
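The virtual memory analogy maps over almost literally. Here is a minimal CPU-side sketch of the idea (my own toy construction, not Rage's code): a page table maps virtual texture pages to slots in a small physical cache, pages are faulted in on demand, and the least recently used page gets evicted - the page size and cache capacity are made-up numbers:

```python
from collections import OrderedDict

PAGE_SIZE = 128          # texels per page side (illustrative)
CACHE_CAPACITY = 4       # physical pages resident at once (illustrative)

class VirtualTexture:
    def __init__(self, load_page):
        self.load_page = load_page      # callback: page coord -> page data
        self.cache = OrderedDict()      # page coord -> data, in LRU order

    def sample(self, u, v):
        # Translate the virtual texel address into a page coordinate...
        page = (u // PAGE_SIZE, v // PAGE_SIZE)
        # ...then either hit the cache or "page fault" and load it.
        if page in self.cache:
            self.cache.move_to_end(page)             # mark recently used
        else:
            if len(self.cache) >= CACHE_CAPACITY:
                self.cache.popitem(last=False)       # evict LRU page
            self.cache[page] = self.load_page(page)
        # Offset within the page, like the offset bits of a virtual address.
        return self.cache[page], (u % PAGE_SIZE, v % PAGE_SIZE)
```

The `load_page` callback is where the engine would decompress a page from disk; on the 360 the result could be handed to the GPU in place, while on PC it had to be copied into GPU memory - which is where the multi-frame copying and the popping described above come in.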
After Rage was released and proved the usefulness of virtual texturing (despite the issues on PC, the game was still one of the best looking games on consoles - and ran at 60fps!), GPU manufacturers were convinced and added hardware support for it. This is known as sparse textures in OpenGL and Vulkan and as tiled resources in Direct3D. After Rage several other engines used virtual texturing, though AFAIK none used it for full precalculation like Rage did (e.g. the Far Cry games use virtual texturing for their terrain by generating the pages on the fly instead of storing them on disk - which is in a way reminiscent of the surface cache used in the original Quake 1 software renderer, to which the idea of megatextures can most likely be traced back).
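The on-the-fly variant trades disk space for compute: instead of baking every page offline like Rage, each page is recomputed deterministically whenever it is requested, so nothing unique is ever stored. A tiny sketch of that idea (the hash-based "terrain" function is a stand-in for real compositing of splat maps and materials, not anything Far Cry actually does):

```python
import hashlib

PAGE = 4  # texels per page side (tiny, for illustration)

def texel(x, y):
    # Deterministic pseudo-random "terrain" value for a world texel.
    digest = hashlib.md5(f"{x},{y}".encode()).digest()
    return digest[0] / 255.0

def generate_page(px, py):
    """Recreate page (px, py) from scratch - nothing is read from disk."""
    return [[texel(px * PAGE + x, py * PAGE + y) for x in range(PAGE)]
            for y in range(PAGE)]
```

Because generation is deterministic, an evicted page can always be regenerated identically later - the same property the Quake 1 surface cache relied on when it rebuilt lit surfaces on demand.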
Doom 2016/id Tech 6 nowadays still uses megatextures and virtual texturing, though they have tweaked the paging algorithm a bit (I do not think they use any of the hardware accelerated methods). Also, while they still bake a lot of information into the megatexture, the lighting itself has mostly switched to some hybrid of forward+ rendering with precalculated indirect lighting. This is really just a natural evolution of the ideas introduced in Rage/id Tech 5, taking advantage of the capabilities of modern hardware.
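For anyone unfamiliar with the term: the core trick of forward+ is tiled light culling - split the screen into tiles and give each tile the short list of lights that can affect it, so shading only loops over nearby lights. A minimal CPU sketch of that binning step (all numbers illustrative, and id Tech 6's actual implementation is clustered and GPU-side):

```python
TILE = 16  # pixels per tile side

def bin_lights(screen_w, screen_h, lights):
    """lights: list of (cx, cy, radius) in screen space -> tile -> light ids."""
    tiles_x = (screen_w + TILE - 1) // TILE
    tiles_y = (screen_h + TILE - 1) // TILE
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for i, (cx, cy, r) in enumerate(lights):
        # Conservative: every tile overlapped by the light's bounding square.
        x0 = max(0, int((cx - r) // TILE))
        x1 = min(tiles_x - 1, int((cx + r) // TILE))
        y0 = max(0, int((cy - r) // TILE))
        y1 = min(tiles_y - 1, int((cy + r) // TILE))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(i)
    return bins
```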
Interestingly, for id Tech 6 Carmack wanted to pursue the idea of "megageometry": extending the megatexture idea to the geometry itself and, instead of rendering polygons (at least for the static geometry), rendering voxels generated by a similar process (the voxel set would be generated from the polygonal meshes, again with lighting information baked in - though not necessarily fully baked). They had shown some results of their work, but I'm not sure if anything went public outside some QuakeCon talk by Carmack, who mentioned that they managed to compress their voxel data to essentially "1.5 bits" per voxel. But this didn't pan out, as GPUs at the time weren't that good at rendering anything other than triangles (perhaps it could work nowadays with RTX, though that again is all about triangles).
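That "1.5 bits per voxel" figure is plausible once you notice how cheap empty space is in a sparse representation. A crude two-level occupancy scheme (my own toy construction, not id's actual format) already gets close: group voxels into 4x4x4 bricks, spend 1 bit per brick for "empty/occupied", and store the 64-bit occupancy mask only for bricks that actually contain something:

```python
BRICK = 4  # 4x4x4 = 64 voxels per brick

def bits_per_voxel(occupied, size):
    """occupied: set of (x, y, z) filled voxels in a size^3 volume."""
    bricks = size // BRICK
    total_bits = bricks ** 3                      # 1 bit per brick
    non_empty = {(x // BRICK, y // BRICK, z // BRICK) for x, y, z in occupied}
    total_bits += len(non_empty) * BRICK ** 3     # 64-bit mask per full brick
    return total_bits / size ** 3
```

A mostly empty volume costs a tiny fraction of a bit per voxel, and even a completely solid one costs just over 1 bit per voxel with this scheme - real formats (sparse voxel octrees and the like) apply the same idea recursively, plus entropy coding for attributes.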
Personally I always found the voxel idea interesting, but the storage requirements would be insane (even Rage, which only baked texture data, was huge; voxel data would most likely be several times bigger). Perhaps a better idea would be to only precalculate a very low resolution voxel set and generate the higher resolution voxel pages on the fly, similarly to how Far Cry generates its terrain. This may become a more viable approach as PCs (and consoles...) get more cores, since voxelization is an inherently parallelizable task and you could dedicate a bunch of cores to just doing that. If only I wasn't too lazy and didn't waste all my free time playing games from the late 90s and early 2000s... :-P
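The "inherently parallelizable" part is easy to see: each slab of the volume can be voxelized independently and the results merged. A toy sketch, with spheres standing in for real meshes (a real implementation would use worker processes or SIMD rather than Python threads, which I'm using here only to keep the example short):

```python
from concurrent.futures import ThreadPoolExecutor

def voxelize_slab(z0, z1, size, spheres):
    """Voxelize one z-slab of a size^3 grid against a list of spheres."""
    out = set()
    for z in range(z0, z1):
        for y in range(size):
            for x in range(size):
                for (cx, cy, cz, r) in spheres:
                    if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                        out.add((x, y, z))
                        break
    return out

def voxelize_parallel(size, spheres, workers=4):
    # Split the volume into z-slabs, voxelize each on its own worker,
    # then union the per-slab results.
    slab = (size + workers - 1) // workers
    starts = list(range(0, size, slab))
    ends = [min(z + slab, size) for z in starts]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(voxelize_slab, starts, ends,
                         [size] * len(starts), [spheres] * len(starts))
    return set().union(*parts)
```

Since the slabs share nothing, the same split works for an on-demand "voxel page" generator: each requested page is just a small slab voxelized in isolation.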