About 33% of coding is being able to troubleshoot and solve issues where a complex system does not behave as the documentation claims it should. It might be a bug, a misconfiguration, a version conflict, a mistake buried deep in the dependency tree of a third-party package, a transient fault... or simply incorrect documentation.
Just today, I was building a docker-compose file and kept getting EOF errors when pulling an MSSQL image from the Microsoft container registry.
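For context, the compose file was roughly this shape (a minimal sketch only; the exact tag, password, and port mapping are assumptions, not my actual config):

```yaml
services:
  mssql:
    # Pulled from the Microsoft container registry (mcr.microsoft.com)
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"                      # required by the MSSQL image
      MSSQL_SA_PASSWORD: "Example!Passw0rd" # placeholder, not a real secret
    ports:
      - "1433:1433"                         # default SQL Server port
```

Nothing exotic, which is exactly why the EOF errors were suspicious: the file itself was fine.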
On a hunch, I fired up a VPN and switched my exit region from Europe to the US; sure enough, it worked. There seems to be some fault in the registry endpoint serving my specific region.
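The quickest way to test that hypothesis was to take compose out of the loop entirely (a sketch of the process; the tag is an assumption):

```sh
# Reproduce the failure outside docker-compose: pull the image directly
docker pull mcr.microsoft.com/mssql/server:2022-latest

# If this also dies with an EOF, the fault sits between you and the
# registry, not in the yaml. Retry the same pull over a VPN with a US
# exit node; success there points at the region-local endpoint.
```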
An AI would just keep insisting the docker-compose file should work, or keep randomising bits of the YAML in the hope that something sticks.
Humans have certain attributes that will never be replicated in AI. AI can be trained to respond like a human, but not to think outside the box; injecting randomness is not the same thing.