In 2012, Elon Musk was in deep shit. Tesla was still producing only hundreds of cars a year and was marvellously unprofitable, with no respite in sight. The company had IPO'd, which made its internal affairs public and raised more questions than answers.
SpaceX was neither dying nor thriving. Despite landing NASA contracts that were bankrolling the company's development, missions were not ramping up: 2011 had seen zero launches.
It was against this backdrop that he started digging through a century's worth of old ideas and resurrecting many of them. Amongst them were underground tunnel transportation (The Boring Company), vacuum-tube trains (Hyperloop), brain-machine interfaces (Neuralink) and AI (OpenAI).
All of them were lauded by the press as moonshots, and since he was already trying to build two companies that seemed like impossible dreams, it won him cult status and a fan following. He never fully immersed himself in any of these projects; he was having a hard enough time keeping Tesla and SpaceX afloat.
Sam Altman took the helm of OpenAI and started turning it into a venture. Elon Musk was completely out of the picture.
Between raising capital constantly, going up against Boeing and Lockheed Martin for NASA contracts, cavorting with Jeffrey Epstein, and lying about launch dates for Tesla models, his hands were full!
Ten years later, OpenAI is an overnight success. Microsoft threw in $10 billion at a roughly $30 billion valuation. Elon wishes he had shown up for a few meetings, but he did not.
So now…
Billionaire Elon Musk, Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang joined hundreds calling for a six-month pause on AI experiments in an open letter, warning that otherwise we could face "profound risks to society and humanity."
“Contemporary AI systems are now becoming human-competitive at general tasks,” reads the open letter, posted on the website of Future of Life Institute, a non-profit. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
Source: CBS
Because 6 months is all it takes to solve the whole AI problem.
The first attempt is the hardest. Amazon took years to figure out e-commerce; today, absolute novices can launch a store with less than a thousand dollars of investment. OpenAI took eight years to build. Today, Meta's LLaMA model weights are openly available; all you need is data and server capacity.
The data to train ChatGPT was simply "robbed" off the internet; anyone can do the same.
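In case that sounds abstract, here is a minimal sketch, assuming you have accepted Meta's licence terms and have GPU capacity to spare, of how little code it now takes to stand up an open LLaMA-family model with the Hugging Face transformers library. The model ID is illustrative, not a recommendation:

```python
# A rough sketch, not a recipe: loading an open LLaMA-family checkpoint
# with Hugging Face's transformers library. The model ID below is
# illustrative; the official weights are gated behind Meta's licence.
# Requires the `accelerate` package for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed/illustrative, licence-gated

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The hardest part of building a large language model is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The scarce inputs are exactly the two named above, data and compute; the model-loading code itself is a commodity.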
Musk simply regrets having missed an opportunity; given time, he could work on a competing model and try to overcome OpenAI's head start. In fact, the six months is bait to see if anyone bites; if they do, the pause can be extended until he is at par.
Apple launched the iPhone and killed its iPod business completely. Google had the capability to launch something like ChatGPT but kept punting it, because it would take a chunk out of its ad business.
In the book Build, Tony Fadell describes the state of affairs inside Google, made possible by an ad business that is like a bottomless well of money. At Nest, the average cost per employee went up from $150,000 to $450,000 once all the perks of being part of Google were accommodated. Nest had no way of becoming profitable after that.
Why would Google kill the goose that lays the golden eggs?
2022 changed that.
Now its hand is being forced by ChatGPT. The useless moonshot factories will go, and Google will be forced to focus on building this line of business.
Going back to Sam Altman.
He went in front of a Senate Committee and begged them to regulate AI.
Sam Altman: Thank you for the question, Senator. I don't know yet exactly what the right answer here is. I'd love to collaborate with you to figure it out. I do think for a very new technology we need a new framework. Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well, and so do the people who build on top of it, between us and the end consumer. How we want to come up with a liability framework there is a super important question, and we'd love to work together.
[…]
Sam Altman: Can I weigh in just briefly? I want to echo support for what Mr. Marcus said. I think the US should lead here and do things first, but to be effective we do need something global. As you mentioned, this can happen everywhere. I know it sounds naive to call for something like this, and it sounds really hard. There is precedent: we've done it before with the IAEA. We've talked about doing it for other technologies. Given what it takes to make these models (the chip supply chain, the limited number of competitive GPUs, the power the US has over these companies), I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though on its face it sounds like an impractical idea. And I think it would be great for the world. Thank you, Mr. Chairman.
[…]
Sen. Dick Durbin: Thank you. I think what's happening today in this hearing room is historic. I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them. In fact, many people in the Senate have based their careers on the opposite: that the economy will thrive if government gets the hell out of the way. What I'm hearing instead today is a "stop me before I innovate again" message. And I'm just curious as to how we're going to achieve this. As I mentioned Section 230 in my opening remarks, we learned something there. We decided in Section 230 that we were basically going to absolve the industry from liability for a period of time as it came into being. Well, Mr. Altman, on a podcast earlier this year, you agreed with host Kara Swisher that Section 230 doesn't apply to generative AI, and that developers like OpenAI should not be entitled to full immunity for harms caused by their products. So what have we learned from 230 that applies to your situation with AI?
Source: Techpolicy
Do you know what happens when an industry is regulated?
It becomes impossible for the small guy to start from behind and move ahead. The regulations become a burden that cannot be overcome. This is precisely why you do not see someone just starting a bank, a hospital or an aeroplane manufacturing company.
It takes a lot of cash just to overcome the regulatory hurdles and get started. Even startups funded with billions of dollars, operating in the fintech space, do not want to start a bank. That is what regulation does to an industry.
Apple partnered with Goldman Sachs to run its Card and Savings Account business! For perspective, this is a company whose valuation is about 75% of the GDP of India. Its revenues and profits are roughly $400 billion and $100 billion a year respectively, and it has more cash than many banks in the US.
Sam Altman wants regulation because he has already committed every copyright fraud in the book to set up his venture. He can afford to make it onerous for those who are only now getting started.
A lot of statements are being made comparing ChatGPT to nuclear weapons and declaring it just as dangerous. If our experience with nuclear weapons is anything to go by, and ChatGPT really is going to morph into some supreme intelligence, we should dismantle it and bury it, no?
No. We should have a small coterie regulate it!
And his model for it is the IAEA. *Slow clap*
The IAEA was set up to make it impossible for any government to run a nuclear programme without someone peering over its shoulder, right down to its underwear.
In 1953, U.S. President Dwight D. Eisenhower proposed the creation of an international body to both regulate and promote the peaceful use of atomic power (nuclear power), in his Atoms for Peace address to the UN General Assembly. In September 1954, the United States proposed to the General Assembly the creation of an international agency to take control of fissile material, which could be used either for nuclear power or for nuclear weapons. This agency would establish a kind of “nuclear bank”.
The United States also called for an international scientific conference on all of the peaceful aspects of nuclear power. By November 1954, it had become clear that the Soviet Union would reject any international custody of fissile material if the United States did not agree to disarmament first, but that a clearinghouse for nuclear transactions might be possible. From 8 to 20 August 1955, the United Nations held the International Conference on the Peaceful Uses of Atomic Energy in Geneva, Switzerland. In October 1956, a Conference on the IAEA Statute was held at the Headquarters of the United Nations to approve the founding document for the IAEA, which was negotiated in 1955–1957 by a group of twelve countries. The Statute of the IAEA was approved on 23 October 1956 and came into force on 29 July 1957.
Source: Wikipedia
The IAEA is a body meant to ensure a nuclear oligarchy. Those 12 countries were somehow considered righteous enough to hold weapons, and others were not.
This is exactly what Sam Altman is proposing. He wants a small number of people to be able to determine what is right and wrong in AI. A cartel.
While the American senators were effusive in their praise, the story was very different across the Atlantic.
As part of its rule-making, the EU is seeking to implement transparency measures in so-called general-purpose AI. “Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training,” the European Parliament noted on May 11.
The need for transparency over the data collected to train the algorithm has long been a concern for regulators in European countries; that was the basis for Italy's temporary ban on ChatGPT in March. Altman isn't making any promises on that front. "If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible," Altman said, according to Time.
Source: Quartz
The EU regulators are not eating out of his hand. Generally, the EU is not kind to tech companies from the US; it feels that these organisations have acted in ways that sabotaged their European counterparts. Also, European governments are not bought and sold by lobbyists the way the US government is.
Sam Altman said he would pull out of Europe and then swiftly recanted. Facebook was recently fined $1 billion and will continue to operate in the EU. There are 11 billion other reasons for that.
OpenAI is not the gift from the gods it is being made out to be, but while the hype lasts, Sam Altman is going to try to build an AI cartel that he leads and controls. Will everyone else sit around twiddling their thumbs? Seems like it!