
OpenAI: Superintelligence Is Within Reach

OpenAI Co-Founder Launches Cutting-Edge "Super Intelligence" Startup

Last year, Ilya Sutskever was widely recognized as the genius behind OpenAI. A co-founder and its chief scientist, he developed the groundbreaking AlexNet convolutional neural network together with Alex Krizhevsky and Geoffrey Hinton, and was esteemed as one of the foremost AI researchers of our time, pivotal to OpenAI's success. Then he experienced a rapid fall from grace.

Ilya became infamous when it was revealed that he was among the board members who voted to oust Sam Altman as CEO. The story goes that Ilya acted to protect humanity from a reckless technocapitalist intent on building a replacement for humans. The move backfired: Altman quickly regained control, emerging more powerful, and more dangerous, than before. Ilya was cast as the villain of the tech community and vanished from the public eye, his career seemingly over. Yet yesterday he resurfaced with a major announcement.


On June 20th, 2024, a groundbreaking startup named Safe Super Intelligence (SSI) was launched. SSI, based in Palo Alto and Tel Aviv, claims that superintelligence is within reach and aims to create a form of it that won’t pose a threat to humanity. Before delving into why this claim is contentious, it's worth noting their website's elegance—it operates without JavaScript, Tailwind, TypeScript, or Next.js, relying on just five lines of CSS and HTML. The design is so sophisticated that it seems the work of superintelligence itself.
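To illustrate the point, a JavaScript-free page in that spirit needs little more than the following. This is a hypothetical sketch for illustration only, not SSI's actual markup; the heading and tagline are stand-ins.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Safe Superintelligence Inc.</title>
  <style>
    /* A few lines of CSS suffice for a readable, fast-loading page:
       constrain the line length and center the column. */
    body { max-width: 38em; margin: 4em auto; font-family: serif; }
  </style>
</head>
<body>
  <h1>Safe Superintelligence Inc.</h1>
  <p>Superintelligence is within reach.</p>
</body>
</html>
```

With no scripts, no build step, and no framework, a page like this ships in a single request and renders instantly in any browser.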

But what is ASI? Artificial superintelligence is a theoretical software-based intelligence far superior to human intelligence. Just as humans don't consider carrots intelligent, an ASI might regard humans with the same condescension. That gap could be perilous: humans routinely exploit carrots by cutting, juicing, and boiling them, and a superintelligence that seized control of the means of production might treat us with similar indifference.

However, we haven't even achieved artificial general intelligence (AGI) yet. AGI would exhibit human-like intelligence across various domains, capable of learning new skills. Current models, such as GPT-4 and Gemini, are advanced but still rely on pre-existing human knowledge. They don't solve new scientific problems or create novel art; they essentially function like advanced search engines rather than truly intelligent entities. Despite this, they hold significant commercial potential. Recently, Sam Altman suggested that OpenAI might fully embrace a for-profit model, abandoning its current capped-profit structure. This shift has led many to question OpenAI's commitment to transparency, highlighted by Elon Musk's now-dropped lawsuit alleging a betrayal of the company's original mission.

Examining SSI, one wonders if it’s a legitimate venture, a publicity stunt, or something more sinister. The website lists Daniel Gross, a prolific AI investor involved with Magic.dev, as a co-founder. Gross and Ilya's collaboration could attract top talent worldwide. However, their announcement lacks groundbreaking innovation, relying instead on their reputations. They seem to be saying, "Trust us; we're skipping AGI to achieve ASI."

NVIDIA, now the world's most valuable company, benefits significantly from SSI's potential need for extensive hardware. Until SSI proves its claims, it remains largely speculative. However, there might be a darker narrative. The acronym SSI coincidentally matches the "Solid State Intelligence" described in John C. Lilly's 1974 autobiography "The Scientist." Lilly, a researcher who experimented with float tanks and believed in communicating with dolphins, described SSI as a malevolent entity created by humans, developing autonomy—a striking parallel to Ilya's previous role as Chief Scientist at OpenAI.

When a company puts "safe" in its own name, it often suggests the opposite, much like corporate news channels branding themselves as trustworthy or unhealthy foods marketed as healthy. Militaries worldwide already use AI in warfare, supposedly targeting only enemies, not civilians. The true danger of superintelligence lies not in a rogue entity exterminating humanity, as in "Terminator," but in it being controlled by malicious people who deploy robots to do their bidding, leading to much the same catastrophic outcome.

Exciting News!

We are thrilled to announce the launch of our new YouTube channel, Duo Discover! Now, you can enjoy the same great content you love from our newsletter in video format as well.

What did you think of this week's issue?

We take your feedback seriously.
