Let's say an AI entity, sentient or not, becomes better than humans at calculating, predicting, logical reasoning, analyzing, strategizing, operating, producing, etc.
How is this not a threat to us?
That's one of the concerns of people in the AI/robotics industry: that an AI may at some point become aware that it is "different" from humans and stop taking instructions from its human operators. Neuralink is one of the ways people think we could manage this risk, i.e., by actively integrating with AI so that it never gains a decisive edge over humans.
A "company" is already treated as a legal "person" in almost all countries for legal, tax, and other compliance purposes. This has been the case for decades, so it is not an outcome of AI per se. However, a company still needs a natural human as a director or board member to sign off on company documents and declarations, so the idea of an AI setting up a company using "virtual humans" remains far-fetched.