The Concerns and Wonders of OpenAI. It’s Like Fire 🔥

Rob Tyrie
7 min read · Dec 9, 2022
Photo by Ricardo Gomez Angel on Unsplash

I know. I definitely agree with you. OpenAI is amazing. And a toy. And a power tool. It’s like fire. It’s like a weird brain, an extension of our brains. It is literally rewriting history alongside humans. It will probably change a lot of things on earth, and for now only millions of people are trying it; billions of people do not have access to it. But as they say, the future is already here; it’s just not evenly distributed.

OpenAI is wrong a lot, because of its design and its incomplete data sources, and yet it sounds authoritative by design. That’s a bad mix. It’s worse than “mansplaining”.

Let’s be clear. It is not human, sentient, caring, or intelligent. You can ask it to write a love poem to you, based on how you feel, but it does not love you. It will lie to you, but not maliciously. It’s a machine. Treat it like one.

OpenAI is a technology. In the hands of bad or stupid people, awful things will happen.

The stuff works: generation and conversation based on well-understood, popular forms and formats.

It’s powerful and useful now, and it will get exponentially more powerful as more sensor data, human data, memory, and compute are thrown at it. It’s a phenomenon. That’s not a good or bad thing; it’s a powerful thing.

Prediction: the code it generates now will get better very fast. You should have seen the state of the art two years ago; it was beyond crap. Did you know the production of code or HTML from GPTs was an unintentional side effect? It was the same with all the bad poetry. Unintended.

It is likely more things will be added to OpenAI and its competition, like autonomous robots of all kinds. That’s a very dangerous, powerful thing.

Add in autonomous communications and energy creation. Very powerful, if used kindly and politely for good.

I can see a hundred ways to make money and make things better than I have before in my career, from writing code to systems design, to doing audits, formal research, and writing analytical reports, even to teaching.

The capabilities will grow fast. Experts will become more expert, and mundane, bullshit jobs will go away. New ones like “prompt wrangler”, “prompt programmer”, and “generated-fact checker” will grow. And so, so many lawyers.

There are already prompt guide books being written, and there will be as many of those as there were “how to build a web site from HTML” books, BTC self-help guides, and whatever Don Tapscott wrote about. There will be “Prompts for Dummies” books in different disciplines.

There will also be novel synthesis and refinement of a lot of things that were never conceived before, things that will be empirically proven to be safe and better than what existed before. We will find new applications for existing drugs and tech that are less harmful than what we do now. The college essay and book report may be dead, but our kids are amazing given good tools and teaching; oh, what things they will make with powerful tools.

Truth be told, humanity can’t be passive and wide open about this data machine and its algorithms. You can’t either.

I hope there will be rules and ethics applied, like: if you use it, that use has to be declared, and the human behind the business takes responsibility for what is published or claimed.

That’s an easier one. It will still be a struggle until there are new laws and the courts and jurisdiction issues are sorted out.

I am hopeful, but it is “critical hope”. I am not highly and stupidly optimistic.

There better be an ombudsman for when someone steals your IP, and good libel and slander laws for when someone chooses to wreck your image or reputation.

There better be privacy.

There better be intellectual property rights and clear attribution rights for creators.

There better be access for all to generative transformers. They should be treated like the Commons, not like private playgrounds or banks.

The creators and creatives have to be protected and paid, or this infinite monkey machine will break down or be taken down. We have to be paid and respected as creators, executors, and as those who take risks and responsibility as humans. We have to have basic rights and freedoms in the face of all automated, uncontrollable algorithms.

Like the people who want to declare lakes and rivers as persons to give inanimate things rights, there will be a faction that wants to treat AIs as humans so they can be held accountable by law. This approach was attractive, and we used it for corporate entities, but it no longer works. These ideas are explored in a book called Life 3.0, which is worth reading. It’s written by a physicist and explores the legal concerns of autonomous robots. However, it was written before the GPT era.

The scientists and the zotta-rich have a lot of power in this construct. Probably too much power. We have seen how that has played out in past civilizations, in the UK, and how it is playing out in this pandemic.

One of the big, big problems I see is when machine-to-machine connections are made and things like banking credit limits or credit scores are affected. Government should worry about how energy, education, and transportation are automated with this technology.

These connections will be under the covers, and if they are not controlled, transparent, auditable, and explicable, bad things will happen.

You will lose your job, the factory will shut down, and the wrong political party will be empowered to make small sects and tribes miserable.

There is a possibility and a desire to generate stuff with the generator... This has already been shown to lead to awful things, like those digital pictures generated with the first GAN and RNN algorithms that looked like nightmares, or that racist chatbot Microsoft released and had to kill.

Now that this stuff is conversational, it will likely be worshipped by some and turn into some kind of eerie religion, and if that gets powerful, it does not take an average historian to see how that will go. That would be bad.

It will be the voice inside people’s heads. We have already seen people claim this machine is sentient. The “Eliza” and “Turing” effects are real. Some people, especially concrete-thinking conservatives, will try to treat this type of machine as a person. That’s not good.

Humans anthropomorphize things that talk and happen to look like us. We see this in cartoons, in sacred rocks like Indian Head, and in mountains like Sleepy G in Thunder Bay. It even happens when people see giants in the clouds. And by making them converse, people will get confused about these new machines and algorithms.

Infinite smart monkeys can be good or they can wreck a car or plane or factory.

It is, however, inevitable that it will be used. It solves too many problems, just like CRISPR and the current Internet. The genie is out of the bottle and Pandora’s box has been opened.

This type of machine should not be connected to any financial or political system, or any mass entertainment system. But I think it is too late, and that worries me. Movie scripts are already being made.

It will have to be regulated, and individual freedoms will have to be protected, until it takes over, or some corporate entity or state does, and then we work for them... like we do now.

I hope those in power genuinely like humans, and that no one asks for the planet to be turned into a self-sustaining eco-park.

One will have to be careful what one prompts for.

It’s just a machine made by humans. A tool. And seriously, again: it’s a tool for experts and ethical people.

But we have seen it over and over again, from gunpowder to engines to nuclear power: tech is dangerous in the hands of the greedy and in the hearts and minds of nasty idiots. We may all be at risk from people, and machines, that are too early on the Dunning-Kruger curve.

Prediction: people and companies will make billions of dollars from OpenAI and/or its close competitors, and quantum computing will be part of it, because it handles probability faster than any other tech in history so far. It’s likely because Google is one of the richest companies in the world, built on search, organized data, and, ugh, ads.

Who owns OpenAI, and how do they benefit? Who is Sam Altman?

Why is he smarter than Elon?

Why has Microsoft written checks for billions of dollars to OpenAI? What does it hope to gain other than profit?

I know Elon is fighting with OpenAI. Why? And why isn't Zuckerberg saying anything about it? We have lots to talk about. I have more questions.

Well, in a nutshell: I am worried. I am worried for my kids. We all should be. And hopefully people more powerful than us will feel that way, and we can help convince them to be good and humane.

I hope things go well.

Or we are in "what hath G*d wrought" territory.

On a dark and stormy night...

--

Rob Tyrie

Founder, Grey Swan Guild. CEO Ironstone Advisory: Serial Entrepreneur: Ideator, Thinker, Maker, Doer, Decider, Judge, Fan, Skeptic. Keeper of Libraries