Discussion about this post

ilker:

What you wrote holds for low-risk, isolated, simple problems, but it falls short for more complex real-world dynamics.

Most of the points you raise assume there is no meaningful difference between an almost-perfect result and one slightly closer to perfect.

"Diminishing returns" is a failed idea in scenarios with asymmetric risk. A margin of error that is a "few notches" wider does not always yield an acceptable result. For a mission-critical problem, such as one where human lives are at stake, "good enough" is not acceptable; compute and reasoning must continue until the risk gets as close to zero as possible.
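The asymmetry can be made concrete with a toy expected-cost calculation. The numbers below are purely hypothetical (the function name and cost figure are my own illustration, not from the comment): when a single failure carries a catastrophic cost, cutting the error rate from 1% to 0.1% still cuts expected loss tenfold, so the "last few notches" are anything but diminishing.

```python
def expected_loss(error_rate: float, failure_cost: float) -> float:
    """Expected cost per decision: probability of failure times its cost."""
    return error_rate * failure_cost

# Hypothetical cost of one failure in a life-critical system.
CATASTROPHIC = 1_000_000

good_enough = expected_loss(0.01, CATASTROPHIC)    # 99% accurate
near_perfect = expected_loss(0.001, CATASTROPHIC)  # 99.9% accurate

print(good_enough)   # 10000.0
print(near_perfect)  # 1000.0
```

Under symmetric, low-stakes costs the same tenfold error reduction buys almost nothing, which is where the diminishing-returns intuition does apply.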

Also, in high-risk domains, collecting new data is costlier, less conclusive, and more dangerous than reasoning. As long as the data at hand has not been fully exhausted, collecting new data is a risky move.

In developing artificial intelligence systems, reasoning must be pushed to its extreme limits; otherwise, at the "good enough" level, they cannot go beyond today's stochastic-parrot LLMs.

I'm sorry if I fell short; I just wanted to share my ideas.

