In my opinion, these issues all stem from hype built primarily by sci-fi movies and literature. When G. Hinton says that we need to throw it all away and start again, I do not believe he means that our methods do not work. Yes, they have limitations, and it's good that we now know this. But this is how research and science progress.

Saying that we may never achieve AGI so it isn't worth the effort is like saying that we may never colonize Mars so we should abandon the missions. That is no way to evolve.

No, we do not have to mimic the human brain to achieve something extraordinary. Early flying machines also started out by trying to mimic how birds fly, and we ended up with airplanes: flying machines that only vaguely resemble birds, yet work perfectly.

Finally, no, AI does not lose its magic when you become aware of the underlying machinery. We have a tendency to downplay our goals once we have finally reached the peak, but I'm still in awe of how well simple linear regression works.
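To make that awe concrete, here is a minimal sketch of ordinary least squares: a few lines of NumPy recover the slope and intercept of a noisy line. The data, seed, and true parameters below are my own illustrative choices, not anything from the original discussion.

```python
import numpy as np

# Hypothetical toy data: a noisy line with true slope 3 and intercept 2.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=50)

# Design matrix with a bias column; least squares solves for (slope, intercept).
X = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(X, y, rcond=None)

print(w, b)  # estimates land close to the true values 3.0 and 2.0
```

That's the whole trick: one linear-algebra call, and the machine "learns" the line from noisy samples. Knowing exactly how it works makes it more impressive, not less.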

I say all this with complete respect for the author's opinion. Not everyone is moved by the same things, and I appreciate people who have strong opinions and the courage to express them.
