#2 – Non-human Powers of AI
In AI Simplified episode two, we chat about all the non-human powers of AI and implications for organizations and society.
- How common AI & human behaviors compare
- How to implement AI
- AI consciousness
- What does this all mean for our society
Skip, recap, or review with timestamps:
0:39 Non-Human Powers of AI
6:17 Data AI Works With
11:17 Makeup of Human Mind & Narrow AI Algorithms
16:05 Where To Start With Non-Human Powers of AI
22:35 AI & Fake Consciousness – Regulating Human Minds
31:20 Don’t Make Your Roomba Scream
35:22 AI & Mimicking Consciousness – Slippery Slope
38:44 What Does This Mean For Our Society
45:00 Extended Minds & Human-Centered AI
What are the non human powers of AI?
It is always on with whatever it is doing.
By design, AI does not have to go home at 5pm or 6pm, or whenever most humans end their work day.
It is very much 24-7 in nature and also global in nature. So, AI has the ability to continue to do its work, whatever the work may be, and then make the decisions that it needs to. This includes learning from new data that is coming in as well.
This is a hugely powerful non-human feature of AI, as it relates to people and our common knowledge of work.
We humans mostly divide the day into three eight-hour chunks: sleeping, working, and non-working (which can also include learning or improving).
The beauty of AI is that it can do all three of these things in tandem: doing the work and getting better at what it needs to do with new learning, in parallel.
And it can do this non-stop. People, in comparison, have to do these chunks sequentially: we can work or sleep, but not both at the same time.
Each component of the day is very important. Hence, saying of AI that "it's always on" sounds like a trivial, cliched thing to say, but it really is more meaningful than that fact alone.
You can have so many different manifestations and instances of AI running, in different environments, all at the same time.
Thus, it's not only always on with whatever it is doing; it is simultaneously getting better at that work, across many instances, globally. This is a very non-human power of AI.
The AI of today is not conscious, not the way we, as people, are conscious. We exist. It is what Descartes was referencing: "I think, therefore I am."
Consciousness is this quality of felt experience. We have these components of consciousness: being self-aware, being able to reflect and know whether something feels like pain, whether it feels real. The redness of a rose: that redness, that color, feels like something. It is a subjective, first-person, private experience that is very much your own, to be and to feel. There is a sense of ownership in your body, a lived sort of experience.
We are creatures capable of joy and pain and with that comes a lot of ethical and moral obligations.
Luckily, the AI of today is not conscious. The moment AI becomes conscious, passes some test for consciousness, then all kinds of ethical questions that apply to people, with implications for caring for well-being around joy, pain, suffering, and quality of life, would apply to the AI as well. You couldn't just turn it on and off at will.
You couldn't send an AI into hazardous, life-or-death environments if it were conscious and sentient.
We want to treat other conscious beings the same way we want to treat ourselves, hopefully.
What is the future of AI?
One thing that I (Amjad) foresee over the next three to seven years is related to this whole sensory-modality, perceptual-computing front.
First of all, we will talk about the human brain on our podcast, specifically about the way our brain is wired, because this relates to AI's capacity now and its growth potential. Why we focus on human-centered artificial intelligence will become clearer and clearer as we go deeper into these conversations.
Don’t miss an episode. Subscribe today.
Currently, if we feed our brain more sensory data, it has the power to start perceiving that data and giving it meaning. Hence, in different labs today, all kinds of experiments are being done to add more sensory modalities to humans. As people, for example, we have a lot of skin surface area. Not all of it is used by the brain all the time, so part of it can serve as extra sensors as we enter this wearable computing age.
Extra Brain Sensors?
I'll just give a very fun, nerdy example. There are add-ons (not surgical ones!), such as a smartwatch, that can help teach the brain to become a GPS. The watch starts sending signals about north, south, east, and west, and before you know it, our mind is trained. We know, we are consciously aware of, what direction we are looking: I feel I am looking southeast, and when I check my watch at sunset, it is spot on.
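To make the wrist-compass idea concrete, here is a minimal sketch of how such an add-on might decide which part of the band to buzz. Everything here is hypothetical (the motor count, layout, and function names are illustrative, not a real device API): a band with N vibration motors spaced evenly around the wrist vibrates the motor currently pointing closest to north, so direction becomes a steady background signal the brain can learn.

```python
# Hypothetical sketch, not a real wearable API: a wristband with
# N vibration motors spaced evenly around it buzzes whichever
# motor currently points closest to magnetic north.

N_MOTORS = 8  # one motor every 45 degrees around the band

def motor_for_heading(heading_deg: float) -> int:
    """Return the index of the motor pointing closest to north,
    given the wearer's compass heading in degrees (0 = north)."""
    # Relative to the wearer, north sits at (360 - heading) degrees.
    relative = (360.0 - heading_deg) % 360.0
    return round(relative / (360.0 / N_MOTORS)) % N_MOTORS

print(motor_for_heading(0))    # facing north -> motor 0 (forward)
print(motor_for_heading(90))   # facing east  -> motor 6 (to the left)
```

The design choice mirrors sensory-substitution experiments: the mapping stays fixed and continuous, so over days of wear the brain can internalize the buzz as an intuitive sense of direction rather than a number to read.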
If we look at the neocortex of a human brain, you might say, "well, it's just one sheet, a few millimeters in thickness." It has an interesting geometry to fit into our skull. It looks pretty much the same everywhere, which suggests there are no special, region-specific algorithms going on. All those neurons, billions of them in our neocortex, are doing the exact same thing. So if we start sending some other data to the part that is used to see, it will start learning that data as well.
Bringing it back: even in our brain, it looks like it's just one algorithm. One, not two, not three, not 500; it looks like it's only one algorithm.
However, this sensory data is where the intelligence is: how it is encoded and modulated, and how the brain is fed.
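The "one algorithm, many data streams" idea above can be illustrated with a toy example. This is purely illustrative, not a model of the brain: the same simple learning rule (a perceptron update, standing in for the claimed universal algorithm) is reused unchanged on two differently labeled "sensory" streams, and only the data differs.

```python
# Toy illustration of the "one algorithm" idea: the SAME learning
# rule is applied, unchanged, to two different "sensory" streams.
# Purely illustrative -- not a model of cortex.
import random

def train(samples, steps=2000, lr=0.1):
    """One generic learning rule: the perceptron weight update."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        x, label = random.choice(samples)
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = label - pred
        w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
        b += lr * err
    return w, b

def accuracy(model, samples):
    w, b = model
    hits = sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == label
        for x, label in samples
    )
    return hits / len(samples)

random.seed(0)
grid = [(x / 4, y / 4) for x in range(5) for y in range(5)]
# "Vision-like" stream: one decision boundary...
vision = [((x, y), 1 if x + y > 1 else 0) for x, y in grid]
# ..."touch-like" stream: a different boundary, same algorithm.
touch = [((x, y), 1 if x - y > 0 else 0) for x, y in grid]

print(accuracy(train(vision), vision))
print(accuracy(train(touch), touch))
```

The point is not the perceptron itself but that `train` contains no stream-specific code: feed it different data and the same rule learns a different task, which is the intuition behind rewiring one cortical area to handle another sense.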
Narrow AI can sense a lot more.
And then, as we make its algorithm, its perceptual algorithms better and better, just think about the possibilities.
Now, if we combine that with us as people, whether through extra sensors or smart glasses, we get to a concept that Andy Clark and David Chalmers call the extended mind. We are now extending our mind, because we are able to sense a lot more and perceive a lot more.
So, in the next three to five years, we will be able to sense and perceive more as narrow AI and people come together.
If we couple it right, we have an amazing perceptual capability, as people, to perceive a lot more.
There is a lot of bandwidth there; as Richard Feynman used to say, there's plenty of room at the bottom.
We’ll explore that concept in the context of quantum computing and other things in future episodes, but we have an amazing capability to enhance ourselves.
It is fascinating that, just around the corner, these non-human capabilities are going to become extensions of us as humans.
Where should businesses start with AI?
New AI, or narrow AI, is still in its early days, so businesses and companies are still in the very early stages of technology implementation. Business leaders are all talking to each other: "hey, what does it mean for our business?"
Any journey starts with the simple step of assessing where you stand at the moment. So, in this context, a good starting question would be: "Do we have some AI software, some AI agent, within our business?"
Why a podcast about human-centered AI?
People are either talking about artificial intelligence (AI), machine learning, deep learning, etc. in a very technical way or not at all. Having studied Computer Science and Advanced Analytics and created an AI – advanced analytics supply chain company, I understand first-hand how valuable the field is, but not everyone does. I want to change that. We can all learn principles of AI and implement them in very practical ways. Whether you want to automate a part of your business or create a new business altogether, AI can help you. My goal through this podcast is to simplify AI.
Calling it a podcast gives you a broad understanding of how we’ll communicate, but the format will be much more engaging than just audio. However, you can listen to AI Simplified like a traditional podcast too.
Founder + Chief Executive Officer, Algo