Introverted Forbes Under 30 AI Innovator
Wondering if introversion is holding you back from success?
In this episode of Low to Grow, Dr. Joyjit Chatterjee shares how he transformed from a shy introvert into a Forbes Under 30 honoree in Manufacturing and Industry. His journey is a story of empowerment, personal growth, and leadership—showing young professionals that introversion can be a strength, not a limitation.
Joyjit also dives into his cutting-edge research in AI and Explainable AI, breaking down why transparency in technology is crucial for building trust. We discuss how AI can reduce social anxiety, improve mental health through simulations and companionship, and empower the next generation of leaders.
🎙️ In this conversation, you’ll learn:
How to embrace introversion as a tool for confidence and leadership
Why Explainable AI matters for the future of technology
How AI agents and tools can transform everyday life
Ways AI can support mental health and help manage social anxiety
The one mindset shift young people need for authentic growth
Chapters:
00:00 Journey to Forbes Under 30 in Manufacturing and Industry
03:11 Understanding AI: What is Black Box and Explainable AI?
05:41 Current Misconceptions about AI
07:34 Joyjit’s Research in AI for Future Labs
10:05 Explaining AI Keywords
12:45 The Future of AI Agents
15:11 AI in Everyday Life
18:25 AI Tools for Social Anxiety
20:09 Advice for Young People
22:35 Legacy and Impact
25:06 AI and Mental Health
35:32 Outro
Follow me on:
Instagram: @lowtogrowpodcast
TikTok and YouTube: @lowtogrow
https://www.lowtogrow.com
Say hi at lowtogrowpodcast@gmail.com :)
Follow Dr. Joyjit Chatterjee:
LinkedIn: http://linkedin.com/in/joyjitchatterjee/
Website: https://www.hull.ac.uk/staff-directory/joyjit-chatterjee
Please Note: This podcast is for educational purposes only and should not replace professional medical or mental health advice. If you’re struggling, please consult a qualified professional. Free resources are available at https://www.mind.org.uk/
00:00:00,040 --> 00:00:02,640
There are so many young people
in the world who are introverted
2
00:00:02,720 --> 00:00:06,360
who think often before they
actually take any action, before
3
00:00:06,360 --> 00:00:09,280
they actually kind of speak up
in front of others.
4
00:00:09,280 --> 00:00:12,200
So there is that kind of
hesitation which might come
5
00:00:12,200 --> 00:00:15,480
across as a lack of confidence,
but actually it's thinking more
6
00:00:15,520 --> 00:00:18,960
before you speak up.
That is actually a quality which
7
00:00:18,960 --> 00:00:20,920
you can leverage to do things
like...
8
00:00:21,360 --> 00:00:24,400
Welcome to Low to Grow, the
podcast transforming life's
9
00:00:24,400 --> 00:00:27,000
toughest moments into
opportunity for growth.
10
00:00:27,720 --> 00:00:31,160
I'm Annie, a Forbes Under 30
technology founder whose
11
00:00:31,160 --> 00:00:33,840
entrepreneurship journey ran
parallel to a mental health
12
00:00:33,840 --> 00:00:37,520
awakening.
In every episode, I sit down
13
00:00:37,520 --> 00:00:41,120
with inspiring individuals and
delve into how they managed to
14
00:00:41,120 --> 00:00:44,080
turn their personal or
professional challenges into
15
00:00:44,080 --> 00:00:47,600
opportunities for growth.
If you're facing uncertainty in
16
00:00:47,600 --> 00:00:51,120
your life, feeling down, or
simply need a kick of
17
00:00:51,120 --> 00:00:54,840
inspiration to keep moving
forward, this is your space for
18
00:00:54,840 --> 00:00:57,960
the honest and uplifting
conversations that you will want
19
00:00:57,960 --> 00:01:00,680
to hear.
Hit follow so you never miss an
20
00:01:00,680 --> 00:01:05,640
episode and let's dive in.
Today on Low to Grow, we're
21
00:01:05,640 --> 00:01:09,360
joined by someone whose story is
as humbling as it is inspiring.
22
00:01:09,920 --> 00:01:14,080
Dr. Joyjit Chatterjee was
once a quiet, shy kid, often
23
00:01:14,080 --> 00:01:16,240
underestimated by classmates and
adults.
24
00:01:16,840 --> 00:01:20,320
He was even told that he lacked
what it takes to survive in the
25
00:01:20,320 --> 00:01:23,520
real world.
But through patience,
26
00:01:23,520 --> 00:01:27,280
resilience and a quiet
determination, Joyjit proved
27
00:01:27,280 --> 00:01:30,440
that being an introvert is
actually a superpower rather
28
00:01:30,440 --> 00:01:32,880
than a weakness.
Now, Joyjit
29
00:01:32,880 --> 00:01:36,560
is a Forbes Under 30 honoree
in Manufacturing and Industry,
30
00:01:37,040 --> 00:01:40,840
also working as a lead data
scientist at EPAM and is a
31
00:01:40,840 --> 00:01:43,200
global speaker on AI and
sustainability.
32
00:01:43,960 --> 00:01:47,360
He has worked on smart factories
across continents, earned
33
00:01:47,360 --> 00:01:51,080
international recognition for
his research and also shared
34
00:01:51,080 --> 00:01:55,280
stages with Fortune 500 leaders,
all the while staying grounded.
35
00:01:55,880 --> 00:01:59,680
Today is a conversation about
the power of patience, listening
36
00:01:59,680 --> 00:02:03,200
deeply and just letting your
work speak louder than your
37
00:02:03,200 --> 00:02:06,360
voice ever could.
Joyjit, I am delighted to
38
00:02:06,360 --> 00:02:09,840
have you here on Low to Grow.
Who do you think will benefit
39
00:02:09,840 --> 00:02:12,600
the most from listening to our
conversation today?
40
00:02:13,480 --> 00:02:16,000
There are so many young people
in the world who kind of are
41
00:02:16,000 --> 00:02:19,640
introverted, who kind of think
often before they actually take
42
00:02:19,640 --> 00:02:23,240
any action, before they actually
kind of speak up in front
43
00:02:23,240 --> 00:02:25,160
of others.
So there is that kind of
44
00:02:25,160 --> 00:02:28,200
hesitation, which might come
across as a lack of confidence.
45
00:02:28,200 --> 00:02:31,040
But actually it's like you're
just trying to be humble.
46
00:02:31,040 --> 00:02:34,120
If you are trying to be grounded
and if you are trying to give
47
00:02:34,120 --> 00:02:37,440
others the stage before you kind
of jump in and make your views
48
00:02:37,440 --> 00:02:40,520
kind of evident.
I think a lot of young people these
49
00:02:40,520 --> 00:02:42,760
days would understand that
50
00:02:42,800 --> 00:02:47,640
being an introvert, thinking
51
00:02:47,640 --> 00:02:51,400
more before you actually take
any steps further, before you
52
00:02:51,520 --> 00:02:54,960
speak up, that is actually a
quality which you can leverage
53
00:02:54,960 --> 00:02:57,840
to do things like research, do
things like innovation.
54
00:02:57,840 --> 00:03:01,840
So it's about how young
professionals can actually move from the
55
00:03:01,840 --> 00:03:05,680
society which might treat it as
a lack of confidence to actually
56
00:03:05,680 --> 00:03:08,200
something, as you mentioned,
like a superpower which will
57
00:03:08,200 --> 00:03:10,960
actually help you transition and
grow in your career.
58
00:03:10,960 --> 00:03:13,160
So yeah.
Wonderful.
59
00:03:13,160 --> 00:03:15,640
Thank you.
I am very excited to delve deep
60
00:03:15,640 --> 00:03:18,240
into that with you.
Let's start at the beginning
61
00:03:18,240 --> 00:03:20,280
then.
So Joyjit, you've spoken about
62
00:03:20,280 --> 00:03:23,400
being shy and introverted when
you were a child.
63
00:03:23,800 --> 00:03:27,840
How do you feel that those early
experiences actually shaped the
64
00:03:27,840 --> 00:03:31,160
way that you approach
communication and leadership in
65
00:03:31,160 --> 00:03:34,240
your work today?
I would say, you know, like
66
00:03:34,280 --> 00:03:39,360
during my childhood, because
that's where most of our life is
67
00:03:39,360 --> 00:03:41,600
shaped, right?
When we are kids, that's where
68
00:03:41,800 --> 00:03:43,640
the beginning of our life is
shaped.
69
00:03:43,640 --> 00:03:47,160
I was very shy, like really,
really shy, really quiet.
70
00:03:47,160 --> 00:03:50,200
I would hesitate to even ask
simple things in front of
71
00:03:50,200 --> 00:03:54,160
teachers and to speak up when I
maybe was struggling to
72
00:03:54,160 --> 00:03:56,120
understand something and
stuff like that.
73
00:03:56,120 --> 00:04:00,080
So I wouldn't really be that
kind of outgoing in terms of my
74
00:04:00,080 --> 00:04:02,640
personality.
So that was something that was
75
00:04:02,640 --> 00:04:05,240
by nature only, you know, like
it wasn't something that I
76
00:04:05,240 --> 00:04:07,040
had kind of learned or anything,
obviously.
77
00:04:07,040 --> 00:04:09,480
It was something that my
personal nature was always like
78
00:04:09,480 --> 00:04:13,840
that I used to speak whenever I
felt like this is like the right
79
00:04:13,840 --> 00:04:15,320
moment.
These are the right people.
80
00:04:15,320 --> 00:04:18,440
Like a comfort zone, right?
I had a particular comfort zone.
81
00:04:18,440 --> 00:04:21,880
Maybe it might be my parents,
maybe some best friends whom I
82
00:04:21,880 --> 00:04:25,880
would be most comfortable in
speaking to, but not really to
83
00:04:25,880 --> 00:04:28,960
strangers, not really to new
people, students, other
84
00:04:28,960 --> 00:04:31,560
classmates, especially the ones
who are more outgoing.
85
00:04:31,560 --> 00:04:34,880
They would mock me and they
would tell me this nature, it
86
00:04:34,880 --> 00:04:36,640
won't help you in your future
career.
87
00:04:36,680 --> 00:04:40,280
It won't help you in your life
because really, if you don't
88
00:04:40,280 --> 00:04:43,840
speak up, if you can't like put
in your views or if you don't
89
00:04:43,840 --> 00:04:47,280
ask questions, then you will
struggle to grow in your life.
90
00:04:47,320 --> 00:04:50,280
It came across as something that
was more destructive, you know,
91
00:04:50,280 --> 00:04:53,600
like it would actually hamper
you, and it could lead you
92
00:04:53,600 --> 00:04:56,440
to depression, it could lead you
to anxiety and stuff like that.
93
00:04:56,440 --> 00:05:02,160
So that was kind of, you know,
like the moment that made me go
94
00:05:02,160 --> 00:05:05,320
into the low point of my life,
like during childhood when I
95
00:05:05,320 --> 00:05:10,000
realized that, OK, like, society
views introversion and
96
00:05:10,160 --> 00:05:14,320
quietness as a weakness rather
than something that is just
97
00:05:14,320 --> 00:05:18,240
coming across from our nature,
which we can't really change
98
00:05:18,240 --> 00:05:21,480
when we are kids.
Do you feel that there was a
99
00:05:21,480 --> 00:05:25,840
turning point where you actually
realized that your naturally
100
00:05:25,840 --> 00:05:28,920
quiet
nature was actually an advantage
101
00:05:28,920 --> 00:05:31,880
and a strength that you have?
Yes.
102
00:05:31,920 --> 00:05:36,520
So I would say that once I moved
to university, that was the
103
00:05:36,920 --> 00:05:40,680
point when I realized that there
is a bigger picture behind this
104
00:05:40,680 --> 00:05:44,240
nature, which means that I have
got certain skills which some
105
00:05:44,240 --> 00:05:48,600
other people might not have in
terms of giving others the
106
00:05:48,600 --> 00:05:52,360
opportunity to speak up rather
than being the loudest person in
107
00:05:52,360 --> 00:05:54,920
the room.
It makes me understand things
108
00:05:54,960 --> 00:05:58,320
deeper and I can go into the
intricacies into the details
109
00:05:58,320 --> 00:06:01,400
which other people cannot.
So that was where I got really,
110
00:06:01,400 --> 00:06:04,880
really interested in research
right from my undergrad, right?
111
00:06:04,880 --> 00:06:08,680
So as soon as I moved into uni
in my undergrad, I kind of
112
00:06:08,680 --> 00:06:12,280
published quite a few research
papers, which is quite rare
113
00:06:12,360 --> 00:06:14,960
for undergrad students.
And that got me interested in
114
00:06:14,960 --> 00:06:19,560
participating in conferences and
giving presentations, stuff like
115
00:06:19,560 --> 00:06:21,480
that.
And obviously that led to the
116
00:06:21,480 --> 00:06:25,280
PhD and then even more
upliftment as a researcher, as a
117
00:06:25,280 --> 00:06:29,080
scientist.
So yeah, I think that low point
118
00:06:29,080 --> 00:06:32,600
in itself, it became like the
strength when I realized that,
119
00:06:32,600 --> 00:06:36,120
OK, this is something that could
be a trait of a scientist
120
00:06:36,200 --> 00:06:39,360
growing up in their life.
That's so interesting.
121
00:06:39,480 --> 00:06:42,920
And I remember you actually told
me that when you were a child,
122
00:06:43,200 --> 00:06:46,040
other adults told your parents,
or rather they asked your
123
00:06:46,040 --> 00:06:49,440
parents, how will you be able to
survive in the real world?
124
00:06:50,640 --> 00:06:54,720
Do you remember as a child when
you heard that, how you actually
125
00:06:54,720 --> 00:06:57,120
felt?
Oh yeah, it was terrible.
126
00:06:57,240 --> 00:07:00,880
It felt like I couldn't do
anything literally for days;
127
00:07:00,880 --> 00:07:04,760
that thing was in my mind for
like several days.
128
00:07:04,760 --> 00:07:08,040
Like I even struggled to eat,
honestly. Like if someone
129
00:07:08,040 --> 00:07:10,520
tells you that right, like when,
especially when you are a kid,
130
00:07:10,520 --> 00:07:15,240
when you don't know how to
tackle these things.
131
00:07:15,240 --> 00:07:20,320
Any negative perception that
comes across from not just other
132
00:07:20,320 --> 00:07:23,320
children, other students, but
even teachers, right?
133
00:07:23,320 --> 00:07:26,520
I think the sad thing is even
teachers thought that, OK, this
134
00:07:26,520 --> 00:07:31,600
is like something that would be
a negative thing in
135
00:07:31,600 --> 00:07:33,920
Joyjit.
But fortunately my parents were
136
00:07:33,920 --> 00:07:36,960
quite supportive.
My grandfather, who is no more,
137
00:07:36,960 --> 00:07:39,880
but he was like, he was very
much like me.
138
00:07:39,880 --> 00:07:42,640
I think like, so maybe it comes
from genetics as well.
139
00:07:42,640 --> 00:07:46,640
He was also a writer, author,
you know, like he used to edit
140
00:07:46,640 --> 00:07:49,080
books, he used to write stuff
and stuff like that.
141
00:07:49,080 --> 00:07:52,360
So I think like that quality of
being a writer, being a
142
00:07:52,360 --> 00:07:56,040
researcher and all of that, it
comes from genetics and kind of
143
00:07:56,040 --> 00:07:58,160
my family was quite
understanding:
144
00:07:58,240 --> 00:08:02,520
OK, maybe Joyjit's grandfather
was also quite quiet all his life,
145
00:08:02,520 --> 00:08:06,240
and he's like him.
They gave me the opportunity to
146
00:08:06,240 --> 00:08:09,720
not take that as a negative
thing, but rather take that as
147
00:08:09,720 --> 00:08:11,960
an opportunity to work on
myself.
148
00:08:11,960 --> 00:08:16,280
Maybe this is something that I
can work on and I can start to
149
00:08:16,280 --> 00:08:19,200
speak to strangers.
I can start to collaborate with,
150
00:08:19,200 --> 00:08:23,040
you know, other people during my
education journey and stuff like
151
00:08:23,040 --> 00:08:25,200
that.
So yeah, like my parents helped
152
00:08:25,200 --> 00:08:28,680
me a lot in recognizing this as
something that I could actually,
153
00:08:28,800 --> 00:08:32,480
you know, like, improve.
Based on your experience, then,
154
00:08:32,480 --> 00:08:35,640
Joyjit, as someone who was a
quiet child: if some of our
155
00:08:35,640 --> 00:08:39,640
listeners out there have young
children and they notice that
156
00:08:39,640 --> 00:08:43,400
maybe one or two of their
children tend to be more on the
157
00:08:43,400 --> 00:08:47,280
shy side, so similar to how you
were when you were younger, how
158
00:08:47,280 --> 00:08:51,160
would you advise the parents
then to help their children to
159
00:08:51,200 --> 00:08:54,560
grow out of that shyness or to
really take ownership of that
160
00:08:54,560 --> 00:08:57,640
shyness?
I would say that definitely
161
00:08:57,640 --> 00:09:02,080
don't push the children to
like completely change their
162
00:09:02,080 --> 00:09:03,440
nature.
You know, that would be the
163
00:09:03,440 --> 00:09:07,000
very, very first advice for the
parents that let them be their
164
00:09:07,000 --> 00:09:09,120
authentic self.
There is nothing wrong with
165
00:09:09,120 --> 00:09:11,400
being shy.
There is nothing wrong with
166
00:09:11,440 --> 00:09:14,400
being an introvert.
So it is actually something
167
00:09:14,400 --> 00:09:18,200
which they can leverage as their
biggest superpower, as their
168
00:09:18,200 --> 00:09:21,040
biggest quality as they continue
to grow in their life.
169
00:09:21,360 --> 00:09:24,560
What could be improved is maybe
yes, like speak at the right
170
00:09:24,560 --> 00:09:25,960
moment.
Like instead of being completely
171
00:09:26,600 --> 00:09:29,560
quiet all the time, you know
which I realized that you need
172
00:09:29,560 --> 00:09:30,960
to speak at the right time,
right?
173
00:09:30,960 --> 00:09:33,280
Otherwise people will take you
for granted.
174
00:09:33,280 --> 00:09:36,920
So speak at the right moment,
the right opportunity whenever
175
00:09:36,920 --> 00:09:39,960
you see like, OK, this is the
right point when I need to jump
176
00:09:39,960 --> 00:09:42,680
in, then jump in.
But otherwise, give others the
177
00:09:42,680 --> 00:09:44,640
space.
Also give others the
178
00:09:44,960 --> 00:09:49,680
opportunity to speak up first
before you kind of jump in to
179
00:09:49,680 --> 00:09:51,800
that.
So I would say that let them be
180
00:09:51,800 --> 00:09:55,400
their real self, let them be
authentic, but kind of inform
181
00:09:55,400 --> 00:09:59,640
them and help them to become
more outgoing by maybe
182
00:09:59,640 --> 00:10:02,640
encouraging them to participate
in, you know, like events,
183
00:10:02,960 --> 00:10:06,720
encouraging them to go out in
competitions, be it poetry, be
184
00:10:06,720 --> 00:10:10,080
it singing and all of that.
So hobbies, I think they help a
185
00:10:10,080 --> 00:10:11,960
lot.
I used to play guitar and all
186
00:10:11,960 --> 00:10:13,360
when I was, you know, growing
up.
187
00:10:13,360 --> 00:10:16,760
That helped me a lot to come out
in front of the public, right,
188
00:10:16,760 --> 00:10:21,040
and increase my public
perception in terms of being
189
00:10:21,040 --> 00:10:23,840
someone who is kind of more
interactive with the public and
190
00:10:23,840 --> 00:10:27,520
the society in general.
So yeah, encourage them to do
191
00:10:27,800 --> 00:10:30,200
whatever hobbies your children
are interested in.
192
00:10:30,720 --> 00:10:34,360
Joyjit, I know that you were
selected for the Forbes Under 30
193
00:10:34,360 --> 00:10:36,760
list in Manufacturing and
Industry.
194
00:10:37,080 --> 00:10:40,080
Could you share with us a bit
more about your journey there?
195
00:10:41,040 --> 00:10:45,240
What took me into Forbes under
30 is definitely my research,
196
00:10:45,320 --> 00:10:48,440
you know, so it started with my
PhD at the University of Hull,
197
00:10:48,440 --> 00:10:52,400
which was in AI for the
renewables industry with like
198
00:10:52,400 --> 00:10:56,280
wind farm operators and
organizations like the Offshore
199
00:10:56,280 --> 00:11:00,760
Renewable Energy Catapult in the
UK to develop AI algorithms that
200
00:11:00,760 --> 00:11:04,360
can perform explainable
predictive maintenance.
201
00:11:04,360 --> 00:11:07,480
So not just like highly accurate
maintenance, but also like you
202
00:11:07,480 --> 00:11:11,200
can get like transparency and
accuracy both in your decisions.
203
00:11:11,200 --> 00:11:15,000
I used a variety of concepts
ranging from causal inference to
204
00:11:15,000 --> 00:11:18,720
natural language generation to
knowledge graphs to bring trust
205
00:11:18,720 --> 00:11:21,560
and confidence into the AI
models, which is something the
206
00:11:21,840 --> 00:11:25,800
industry, especially as of
today, is very, very much
207
00:11:25,840 --> 00:11:28,600
looking forward to.
No one wants to use black box AI
208
00:11:28,600 --> 00:11:30,480
models.
They want trust and confidence
209
00:11:30,480 --> 00:11:34,280
in your decision making.
So that research got quite some
210
00:11:34,280 --> 00:11:36,320
attention.
It was published in leading
211
00:11:36,320 --> 00:11:41,120
journals, well cited.
After my PhD, I continued my
212
00:11:41,120 --> 00:11:43,000
research trajectory in
explainable AI.
213
00:11:43,040 --> 00:11:46,760
I worked at Reckitt, which is one
of the leading consumer goods
214
00:11:46,760 --> 00:11:50,320
companies, which makes Dettol,
Lysol and all those products.
215
00:11:50,320 --> 00:11:54,200
So I worked there and I actually
applied the same explainable AI
216
00:11:54,280 --> 00:11:58,640
foundations into the work in the
factories of the future,
217
00:11:58,640 --> 00:12:02,320
how we can reduce waste
in the factories, how we can
218
00:12:02,600 --> 00:12:05,680
improve or optimize the shelf
life of products that they make
219
00:12:05,680 --> 00:12:10,760
in R&D and those kind of areas.
It kind of got that attention of
220
00:12:10,760 --> 00:12:15,960
Forbes and like I think
that's where it led to,
221
00:12:16,080 --> 00:12:19,320
you know, like the manufacturing
and industry category.
222
00:12:19,320 --> 00:12:23,200
And what really helped, I think
to get that attention was my
223
00:12:23,200 --> 00:12:27,360
participation in several
conferences, writing several
224
00:12:27,360 --> 00:12:31,040
research papers in leading
journals, even organizing
225
00:12:31,040 --> 00:12:34,920
workshops and social events at
leading AI conferences like
226
00:12:34,920 --> 00:12:37,280
ICLR, NeurIPS and those kinds
of things.
227
00:12:37,280 --> 00:12:40,440
So that really helped to, you
know, bring together like
228
00:12:40,440 --> 00:12:43,640
industry plus academia plus
public sector, like everyone
229
00:12:43,640 --> 00:12:47,360
could see that, OK, there is
some guy who is working on AI
230
00:12:47,360 --> 00:12:50,040
that connects to sustainability,
that connects to
231
00:12:50,040 --> 00:12:52,280
explainability.
Joyjit, for some of our
232
00:12:52,280 --> 00:12:56,280
listeners who might be just
starting to learn
233
00:12:56,280 --> 00:12:59,680
about what AI is, what does
black box AI mean?
234
00:12:59,920 --> 00:13:03,760
And also, how would you explain
what explainable AI is to
235
00:13:03,760 --> 00:13:06,080
someone who is completely new to
this area?
236
00:13:06,880 --> 00:13:12,680
In very simple terms, I would
say that everyone is using like
237
00:13:12,680 --> 00:13:15,800
ChatGPT right now.
Everyone knows of it as AI, but
238
00:13:15,800 --> 00:13:18,320
behind the scenes it's using
neural networks.
239
00:13:18,320 --> 00:13:21,600
And neural networks are very
complex, like machine learning
240
00:13:21,600 --> 00:13:25,120
models, actually deep learning
models which work very much like
241
00:13:25,120 --> 00:13:28,600
how your human brain functions.
So they are based on the
242
00:13:28,680 --> 00:13:32,080
operations of the human brain.
The analogy is like that, but
243
00:13:32,120 --> 00:13:35,920
they are kind of algorithms or
models which can make highly
244
00:13:35,920 --> 00:13:38,560
accurate decisions.
But as of today, it's
245
00:13:38,720 --> 00:13:41,640
practically infeasible to
understand the workings of how
246
00:13:41,640 --> 00:13:43,960
these large models work or
operate.
247
00:13:44,680 --> 00:13:46,360
That's like the black box behind
this.
248
00:13:46,360 --> 00:13:49,760
So you can use ChatGPT, you
can type in a prompt and you can
249
00:13:49,760 --> 00:13:52,480
get a response.
You also can get the reasoning
250
00:13:52,480 --> 00:13:54,240
in the latest models that are
out there.
251
00:13:54,240 --> 00:13:56,760
You can use o3 and stuff like
that to get the reasoning.
252
00:13:57,120 --> 00:14:01,040
But that is something that the
model doesn't really understand.
253
00:14:01,120 --> 00:14:03,960
The text that goes into the
model, the data that goes into
254
00:14:03,960 --> 00:14:07,480
the model, the model only sees
it as a bunch of numbers.
255
00:14:07,480 --> 00:14:11,000
It is just tokens for the model
and it doesn't really understand
256
00:14:11,320 --> 00:14:14,080
the things in the same sense as
a human does.
257
00:14:14,080 --> 00:14:17,440
So that's like the black box.
And in the industry, for
258
00:14:17,440 --> 00:14:20,840
example, the wind industry, like
I was talking about in my PhD,
259
00:14:21,120 --> 00:14:24,400
you would have data from wind
turbine sensors which could lead
260
00:14:24,400 --> 00:14:28,120
to decisions on faults, right?
You could use that data to build
261
00:14:28,120 --> 00:14:31,720
predictive models that can tell
you that in the next 24 hours
262
00:14:31,720 --> 00:14:33,960
the gearbox of the wind turbine
is gonna fail.
263
00:14:34,360 --> 00:14:37,800
But what actually leads to that
message, like what is making the
264
00:14:37,800 --> 00:14:41,560
model think that in the next 24
hours the gearbox would fail, is
265
00:14:41,560 --> 00:14:43,680
where explainable AI comes into
play.
266
00:14:44,200 --> 00:14:48,280
That is where you could use
techniques to decipher that these
267
00:14:48,280 --> 00:14:51,280
are the most important
parameters from my wind turbine
268
00:14:51,280 --> 00:14:53,560
sensors, which actually
contribute to the fault.
269
00:14:53,560 --> 00:14:56,960
And same goes for factories.
I can predict with an AI model
270
00:14:56,960 --> 00:15:00,520
that the product that I'm making
right now, it is going to have
271
00:15:00,520 --> 00:15:02,680
more waste produced in the next
year.
272
00:15:02,960 --> 00:15:05,280
But what actually leads to that
waste?
273
00:15:05,280 --> 00:15:08,960
What are the root causes of that
waste, and how can I actually go
274
00:15:08,960 --> 00:15:13,120
ahead and make changes at the
ground level in my factory so
275
00:15:13,120 --> 00:15:17,600
that I am minimizing the waste
and maximizing the profits and
276
00:15:17,600 --> 00:15:20,080
revenue for the business?
So that's where explainability
277
00:15:20,080 --> 00:15:22,240
comes in.
That's where causality comes in.
278
00:15:22,240 --> 00:15:26,520
And you actually don't just make
predictions, but you can get
279
00:15:26,520 --> 00:15:28,960
actionable insights out of the
AI. In the GenAI
280
00:15:28,960 --> 00:15:33,360
age, it's quite important to
281
00:15:33,360 --> 00:15:36,800
make sure we use these models in
a trustworthy and responsible
282
00:15:36,800 --> 00:15:41,360
manner.
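A minimal sketch of the explainability idea Joyjit describes, assuming scikit-learn: a black-box classifier predicts turbine faults from sensor data, and permutation importance then shows which sensors drive the prediction. The sensor names, toy data, and fault rule are invented for illustration, not taken from his research.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sensors = ["gearbox_oil_temp", "bearing_vibration", "rotor_speed", "ambient_temp"]
X = rng.normal(size=(1000, len(sensors)))
# Toy ground truth: faults driven mainly by oil temperature and vibration.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

# Permutation importance: shuffle one sensor at a time, measure the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(sensors, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")  # bigger drop means that sensor matters more

In a deployment, a ranking like this is what turns "the gearbox will fail in the next 24 hours" into an actionable, auditable decision.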
In your opinion, Joyjit, what
283
00:15:41,360 --> 00:15:46,240
are some of the current
misconceptions or myths about AI
284
00:15:46,240 --> 00:15:49,840
that really, really annoy you?
One of the biggest
285
00:15:49,880 --> 00:15:54,160
misconceptions is that GenAI
is all of AI.
286
00:15:54,760 --> 00:15:58,160
GenAI is such a buzzword in
which billions of dollars of
287
00:15:58,160 --> 00:16:00,680
investments are being poured in,
as we see.
288
00:16:00,720 --> 00:16:04,400
You will see so many varieties
of GenAI products that have
289
00:16:04,400 --> 00:16:07,520
been released into the market
and which makes a lot of the
290
00:16:07,520 --> 00:16:11,480
people, especially at the senior
levels, right, like the VPs and
291
00:16:11,480 --> 00:16:15,200
the C-level leaders, they might
not know that GenAI is not
292
00:16:15,200 --> 00:16:18,520
something that can accomplish
everything that we need to
293
00:16:18,520 --> 00:16:21,880
change in our business.
So the hype and the buzzword
294
00:16:21,880 --> 00:16:25,640
that has been created around
GenAI, it's like it has
295
00:16:25,640 --> 00:16:28,640
created the biggest
misconception that GenAI
296
00:16:28,640 --> 00:16:31,440
can solve every single
problem that we have in the
297
00:16:31,440 --> 00:16:34,200
business.
To be honest, in my experience,
298
00:16:34,520 --> 00:16:37,720
most of the problems that we
face in everyday life are
299
00:16:37,720 --> 00:16:39,720
something that we don't even
need GenAI
300
00:16:39,720 --> 00:16:43,040
for.
How the R&D operates, how the
301
00:16:43,040 --> 00:16:46,720
supply chain operates, how the
logistics operates, how the
302
00:16:46,720 --> 00:16:49,080
finance operates, different
business functions.
303
00:16:49,440 --> 00:16:53,160
They actually have either
structured or unstructured data
304
00:16:53,440 --> 00:16:56,640
which you can use to build
traditional machine learning
305
00:16:56,640 --> 00:17:00,080
models or even statistical
models and apply data science on
306
00:17:00,080 --> 00:17:02,920
top of it.
And then where GenAI could
307
00:17:02,920 --> 00:17:06,880
help the most is when you have
to build that human interface to
308
00:17:06,880 --> 00:17:10,000
any predictive model to create
like a conversational layer or
309
00:17:10,000 --> 00:17:12,960
to create like an interactive
question answering system.
310
00:17:12,960 --> 00:17:15,359
That's where it could be quite
powerful.
311
00:17:15,359 --> 00:17:19,079
But if you want to do something
trustworthy, you can't like
312
00:17:19,079 --> 00:17:22,839
really use an LLM to do
predictive maintenance in a
313
00:17:22,839 --> 00:17:24,839
factory; that would be very,
very risky.
314
00:17:24,839 --> 00:17:27,319
And you might just get more
false alarms and missed
315
00:17:27,319 --> 00:17:31,040
detections than actually saving
money by using AI.
316
00:17:32,320 --> 00:17:36,320
What is the area of focus for
your research or your work right
317
00:17:36,320 --> 00:17:40,520
now?
I'm quite focused on using AI
318
00:17:40,520 --> 00:17:43,920
for different real world use
cases in the connected lab
319
00:17:43,920 --> 00:17:47,680
space, like in the labs of the
future, how we can use AI to
320
00:17:47,680 --> 00:17:49,680
optimize the new product
development process.
321
00:17:49,680 --> 00:17:52,800
So whenever you are making new
products, then you are trying to
322
00:17:52,800 --> 00:17:55,920
ensure that the quality of the
products is the highest, it is
323
00:17:55,920 --> 00:17:59,880
safe, it is efficacious.
You can leverage a lot of data
324
00:17:59,880 --> 00:18:03,640
that comes from your historical
laboratories, the data that
325
00:18:03,640 --> 00:18:07,240
comes from formulators and
analysts who make all these
326
00:18:07,240 --> 00:18:11,440
new products, which can be used
as a knowledge pool to really do
327
00:18:11,440 --> 00:18:14,720
something impactful and it could
help you create or simulate
328
00:18:14,720 --> 00:18:18,000
new scenarios, new products:
OK, I'm making a new product
329
00:18:18,000 --> 00:18:21,200
with these active ingredients.
What would be the impact if I
330
00:18:21,200 --> 00:18:25,240
change this, if I change my
container, if I change my sample
331
00:18:25,240 --> 00:18:27,920
from a tablet to a capsule
and stuff like that.
332
00:18:27,920 --> 00:18:31,000
That is one of the areas.
Plus I am also quite heavily
333
00:18:31,400 --> 00:18:35,080
involved these days in grounding
AI models into domain-specific
334
00:18:35,080 --> 00:18:36,840
knowledge.
Quite important in the GenAI
335
00:18:36,840 --> 00:18:40,400
age, because we cannot really
use a foundational model like
336
00:18:40,440 --> 00:18:45,720
GPT-4 to understand the context
of our own business, of our own
337
00:18:45,720 --> 00:18:49,360
industry-specific use cases.
Grounding it on knowledge by
338
00:18:49,360 --> 00:18:52,640
using knowledge graphs.
I am quite interested these days
339
00:18:52,640 --> 00:18:54,560
in creating my own knowledge
graphs.
340
00:18:54,560 --> 00:18:58,400
For example, for different use
cases, I can model a wind
341
00:18:58,400 --> 00:19:01,920
turbine as a knowledge graph, as
a system with different sub
342
00:19:01,920 --> 00:19:05,280
components with different alarms
and different connections
343
00:19:05,280 --> 00:19:08,880
between these alarms to the sub
components and things like that.
344
00:19:09,080 --> 00:19:12,840
Then I can connect my knowledge
graph to an LLM which will give
345
00:19:12,840 --> 00:19:16,480
that LLM the historical context
or the knowledge that okay, this
346
00:19:16,480 --> 00:19:20,440
is my domain-specific knowledge
or the foundations which the LLM
347
00:19:20,440 --> 00:19:23,160
should refer to whenever it is
making any decisions.
348
00:19:23,160 --> 00:19:27,120
And that's where agentic AI,
again a big buzzword, comes into
349
00:19:27,120 --> 00:19:29,800
play.
What I am kind of doing right
350
00:19:29,800 --> 00:19:35,280
now is trying to connect it all
to traditional ML plus knowledge
351
00:19:35,280 --> 00:19:39,080
graphs so that we build models
that are, you know, like safe
352
00:19:39,120 --> 00:19:41,240
and like I was telling,
explainable.
353
00:19:41,240 --> 00:19:45,160
So any decisions I get from
an LLM, I can actually attribute
354
00:19:45,160 --> 00:19:48,320
it to the responses I have in my
knowledge graph.
355
00:19:48,320 --> 00:19:51,960
So that I know that OK, there is
actual enterprise data that
356
00:19:51,960 --> 00:19:55,120
tells that this is the way to
resolve the fault in a wind
357
00:19:55,120 --> 00:19:59,280
turbine, or this is the way to
optimize the incorporation of
358
00:19:59,280 --> 00:20:01,800
vitamin C in a new tablet that
I'm making.
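A small sketch of the knowledge-graph grounding pattern he outlines, using networkx. The turbine sub-components, alarm ID, and fix are hypothetical, and the actual LLM call is left abstract; the point is that retrieved facts, not the model's general training, constrain the answer.

import networkx as nx

kg = nx.DiGraph()
kg.add_edge("wind_turbine", "gearbox", relation="has_component")
kg.add_edge("gearbox", "ALM-214 oil overtemperature", relation="raises_alarm")
kg.add_edge("ALM-214 oil overtemperature",
            "replace oil filter, check cooling loop", relation="resolved_by")

def grounding_context(node):
    """Collect the triples around a node so the LLM must cite known facts."""
    return "\n".join(f"{s} --{d['relation']}--> {t}"
                     for s, t, d in kg.edges(data=True) if node in (s, t))

question = "How do I resolve the gearbox alarm on this turbine?"
prompt = ("Answer ONLY from the facts below; say 'unknown' otherwise.\n"
          f"Facts:\n{grounding_context('gearbox')}\n\nQuestion: {question}")
print(prompt)  # this grounded prompt is what would be sent to the LLM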
359
00:20:02,600 --> 00:20:06,400
I'm going to ask you to briefly
explain what some key terms are.
360
00:20:06,760 --> 00:20:09,480
So could you explain to our
listeners who might not be that
361
00:20:09,480 --> 00:20:12,440
familiar with AI, what
foundational models are?
362
00:20:12,920 --> 00:20:17,080
Could you also explain why it is
necessary to ground AI
363
00:20:17,080 --> 00:20:18,800
frameworks?
And lastly, could you also
364
00:20:18,880 --> 00:20:23,240
explain what AI agents are?
Foundational models are
365
00:20:23,240 --> 00:20:27,240
basically your large models that
are trained with almost
366
00:20:27,240 --> 00:20:29,320
anything and everything from the
Internet, right?
367
00:20:29,320 --> 00:20:33,360
So GPT that we all use.
It's a foundation model which is
368
00:20:33,360 --> 00:20:36,440
built on huge amounts of data
from across the Internet, which
369
00:20:36,440 --> 00:20:40,240
might even include Reddit, which
might include Wikipedia and
370
00:20:40,320 --> 00:20:42,720
stuff like that.
So it has got a lot of
371
00:20:42,720 --> 00:20:47,240
knowledge, but it is not
something that is specifically
372
00:20:47,240 --> 00:20:50,400
going to give you a competitive
advantage because it doesn't
373
00:20:50,400 --> 00:20:52,680
know your data.
It doesn't know your problem
374
00:20:52,680 --> 00:20:54,320
statement.
So that's like a foundational
375
00:20:54,320 --> 00:20:57,080
model for you.
Why do we need to ground these
376
00:20:57,080 --> 00:21:00,640
models in our own data?
It's because of hallucinations.
377
00:21:00,640 --> 00:21:05,240
So as you know, ChatGPT, which we
all use a lot of the time, there
378
00:21:05,240 --> 00:21:08,440
is a huge possibility it will
hallucinate, it will make
379
00:21:08,440 --> 00:21:12,320
plausible, real-sounding
responses which might actually
380
00:21:12,320 --> 00:21:16,120
be completely inaccurate.
And in fact it is tolerable
381
00:21:16,120 --> 00:21:20,320
when we are using it to generate
images or do some fun stuff,
382
00:21:20,320 --> 00:21:23,040
experimental stuff.
But when we use it at the
383
00:21:23,040 --> 00:21:26,560
corporate level, at the
enterprise level, it comes with
384
00:21:26,560 --> 00:21:30,040
huge implications.
There is the EU AI Act and there are
385
00:21:30,080 --> 00:21:35,400
regulations that govern the usage
of AI models, like the FDA and EMA
386
00:21:35,400 --> 00:21:38,000
and stuff like that.
There are like huge regulations
387
00:21:38,000 --> 00:21:41,000
that are coming up these days
and huge penalties for
388
00:21:41,240 --> 00:21:44,240
businesses if they do not adhere
to these regulations.
389
00:21:44,240 --> 00:21:46,920
So that's why it's quite
important to choose your own
390
00:21:46,920 --> 00:21:50,480
data to ground these models.
Maybe it would be a lot less
391
00:21:50,480 --> 00:21:52,600
powerful.
It cannot do everything in this
392
00:21:52,600 --> 00:21:54,880
world, but it can do something
very well.
393
00:21:54,880 --> 00:21:58,280
That is where the power of
grounding your model comes into
394
00:21:58,280 --> 00:22:01,640
play.
And agentic AI basically aims to
395
00:22:01,920 --> 00:22:05,520
make sure that LLMs are
something that can work
396
00:22:05,520 --> 00:22:08,360
autonomously.
So instead of us having to go in
397
00:22:08,360 --> 00:22:11,520
and type in a question every
time and get stuff accomplished
398
00:22:11,520 --> 00:22:15,040
every time with a prompt, LLMs
can work autonomously when they
399
00:22:15,040 --> 00:22:17,680
are agentic.
So they can actually approach
400
00:22:17,680 --> 00:22:20,520
your data, they can approach
your systems, they can actually
401
00:22:20,520 --> 00:22:24,400
work 24/7.
They can keep an eye on any
402
00:22:24,400 --> 00:22:27,720
tickets that might be raised in
your company, any issues that
403
00:22:27,720 --> 00:22:29,920
are going on and stuff like
that.
404
00:22:29,920 --> 00:22:33,440
So you could have an HR agent
that looks into HR tickets.
405
00:22:33,440 --> 00:22:37,560
You can have an R&D agent
that is going to, you know, look
406
00:22:37,560 --> 00:22:41,000
at your R&D data from the labs
and it is going to keep an eye
407
00:22:41,040 --> 00:22:44,640
or audit your R&D data to make
sure that all the quality
408
00:22:44,640 --> 00:22:47,480
control, all the quality
assurance and tests like that
409
00:22:47,480 --> 00:22:50,920
are being done properly.
That's the power of agents, like
410
00:22:50,920 --> 00:22:55,160
the autonomous nature and the
ability to work without human
411
00:22:55,160 --> 00:22:57,800
intervention or with minimum
human intervention.
412
00:22:57,800 --> 00:23:01,040
It's quite important to realize
that these things, if you leave
413
00:23:01,040 --> 00:23:04,120
them without any human
intervention at all, then there
414
00:23:04,120 --> 00:23:07,480
is a huge possibility that it
will lead to very, very, very
415
00:23:07,480 --> 00:23:11,240
costly failures or scenarios for
you.
416
00:23:11,240 --> 00:23:15,840
So it's quite important to make
sure that the agents, whenever
417
00:23:15,840 --> 00:23:18,960
you are using it for something
quite critical, for example, you
418
00:23:18,960 --> 00:23:22,080
are using it to give refunds to
a customer, then it's important
419
00:23:22,080 --> 00:23:26,440
that you keep an eye and make
sure that people cannot misuse
420
00:23:26,440 --> 00:23:30,720
your AI agent and make it do
things that you do not intend to
421
00:23:30,800 --> 00:23:34,600
have it do in front of them.
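A minimal sketch of the guardrail he describes: an agent loop that handles low-risk tickets autonomously but pauses for human approval on a money-moving action like a refund. The decide() rule is a stand-in for a real LLM policy.

def decide(ticket):
    """Placeholder for an LLM choosing an action from a ticket."""
    return "issue_refund" if "refund" in ticket.lower() else "auto_reply"

def run_agent(tickets):
    for ticket in tickets:
        if decide(ticket) == "issue_refund":
            # Risky action: keep a human in the loop, never fully autonomous.
            ok = input(f"Approve refund for {ticket!r}? [y/N] ").strip().lower() == "y"
            print("refund issued" if ok else "escalated to a human agent")
        else:
            print(f"auto-replied to {ticket!r}")  # low-risk, safe to run 24/7

run_agent(["Where is my order?", "I want a refund for a damaged item"])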
OK, understood.
422
00:23:34,880 --> 00:23:37,800
In a way, it sounds like
foundational models.
423
00:23:37,800 --> 00:23:41,280
For example, if we take ChatGPT,
that's probably something that a
424
00:23:41,280 --> 00:23:42,720
lot of people are quite familiar
with.
425
00:23:43,080 --> 00:23:46,240
It's a type of conversational AI
because it's a chatbot. You
426
00:23:46,560 --> 00:23:48,240
message it and it messages you
back.
427
00:23:48,240 --> 00:23:51,040
It is a foundational model,
which means
428
00:23:51,040 --> 00:23:54,080
that it's been trained on
basically most things that are
429
00:23:54,080 --> 00:23:57,360
publicly available out there.
So it can be compared to a
430
00:23:57,360 --> 00:24:01,360
worker being trained in a huge
variety of different techniques,
431
00:24:01,720 --> 00:24:04,040
but not really specializing in
anything.
432
00:24:04,240 --> 00:24:07,680
So there's a worker, it can do
work, but if you ask it to
433
00:24:07,760 --> 00:24:10,720
create a very specialized
product, it will really struggle
434
00:24:10,720 --> 00:24:14,480
because maybe the skills that it
needed to have haven't been
435
00:24:14,480 --> 00:24:17,320
focused on yet.
And this is where your work such
436
00:24:17,320 --> 00:24:20,280
as what you were doing in
grounding these models comes in.
437
00:24:20,280 --> 00:24:24,560
So it's really actually teaching
this AI more specialized, more
438
00:24:24,560 --> 00:24:27,920
focused, more relevant skills
for the particular job that you
439
00:24:27,920 --> 00:24:30,360
want it to do.
And then finally, with the AI
440
00:24:30,360 --> 00:24:35,240
agents, it is actually turning AI
models or turning AIs into
441
00:24:35,240 --> 00:24:39,600
almost like individual workers,
that know what they need to do
442
00:24:39,600 --> 00:24:41,480
and they can just do it.
And you don't really need to
443
00:24:41,480 --> 00:24:45,560
have 24/7 oversight or to have
as much hand-holding as
444
00:24:45,560 --> 00:24:48,080
possible.
Would you say that's like a fair
445
00:24:48,080 --> 00:24:50,640
comparison?
Yeah, yeah, definitely.
446
00:24:50,640 --> 00:24:54,640
So it's, it's basically kind of
ensuring that you go from like
447
00:24:54,640 --> 00:24:58,600
just models that work with a
developer or a data scientist,
448
00:24:58,600 --> 00:25:02,560
kind of always keeping an eye on
those, to models that would be
449
00:25:02,560 --> 00:25:04,640
more accessible or democratized,
right.
450
00:25:04,640 --> 00:25:07,800
So I think AI agents are really
helping to democratize these
451
00:25:08,200 --> 00:25:11,800
models and bring them into use by
people who are not like coders
452
00:25:11,800 --> 00:25:14,480
traditionally or not from a
technical background.
453
00:25:15,200 --> 00:25:19,520
Outside of work, what do you
think one of the most exciting
454
00:25:19,560 --> 00:25:24,200
potential applications of AI
tools in humans' day-to-day
455
00:25:24,200 --> 00:25:28,280
lives could be?
AI should be usable in
456
00:25:28,280 --> 00:25:31,880
scenarios where it could
really reduce the amount of
457
00:25:32,000 --> 00:25:33,720
things that we humans have to
do.
458
00:25:33,720 --> 00:25:35,800
Not really doing everything for
us.
459
00:25:35,800 --> 00:25:39,480
Because I feel that honestly, if
AI is doing everything for us,
460
00:25:39,480 --> 00:25:43,000
from like writing an e-mail to
reading an e-mail to writing a
461
00:25:43,000 --> 00:25:46,280
presentation to even reading a
presentation, then no one will
462
00:25:46,280 --> 00:25:48,920
do anything.
That's not the future we as
463
00:25:48,920 --> 00:25:51,880
humans would want or for our
future generations.
464
00:25:51,880 --> 00:25:55,920
But what we really want is
definitely cases where AI is
465
00:25:56,040 --> 00:26:00,480
really, really great at: to help
people who might be having
466
00:26:00,480 --> 00:26:04,320
diseases and performing
diagnosis and supporting
467
00:26:04,320 --> 00:26:06,960
doctors, the medical specialists,
in reaching quick
468
00:26:06,960 --> 00:26:10,080
decisions, quicker turnaround
times in terms of you know, like
469
00:26:10,120 --> 00:26:13,640
giving the patients their future
prognosis and diagnosis and
470
00:26:13,640 --> 00:26:17,560
stuff like that.
It is really adept at drilling
471
00:26:17,560 --> 00:26:20,240
down into the data.
There is so much data around the
472
00:26:20,240 --> 00:26:23,440
world which we have from our
daily life as well, which could
473
00:26:23,440 --> 00:26:25,960
be used to create so many
different useful things.
474
00:26:26,440 --> 00:26:29,720
Especially from the
multimodality bit, like it can help
475
00:26:29,720 --> 00:26:33,000
people to understand things that
are going on around the world.
476
00:26:33,040 --> 00:26:36,280
If some person is blind, then
they can actually use the AI to
477
00:26:36,360 --> 00:26:39,240
interpret what is happening in
their surroundings and it would
478
00:26:39,240 --> 00:26:41,480
be a completely game-changing
thing.
479
00:26:41,480 --> 00:26:45,400
It would be a new life for them.
That is where AI could be very
480
00:26:45,400 --> 00:26:48,280
helpful.
Things like Excel, where you're
481
00:26:48,320 --> 00:26:52,720
building a spreadsheet and stuff
like that and you kind of have
482
00:26:52,720 --> 00:26:56,800
to do lots of calculations and
create lots of formulas and
483
00:26:56,800 --> 00:26:59,440
things like that in which AI
could really help you.
484
00:26:59,520 --> 00:27:03,880
It could assist you to create
these formulas in a shorter time.
485
00:27:03,920 --> 00:27:07,280
It could help you to actually
focus on the analytics rather
486
00:27:07,280 --> 00:27:09,440
than the development.
It would help you to
487
00:27:09,440 --> 00:27:12,200
decipher the data.
You can perform deep research on
488
00:27:12,200 --> 00:27:16,080
the data set and use it to
extract meaningful insights for
489
00:27:16,080 --> 00:27:18,840
your company.
How do you think that someone
490
00:27:18,840 --> 00:27:23,760
who is socially anxious might be
able to use AI tools to help
491
00:27:23,760 --> 00:27:25,800
them better go about their daily
life?
492
00:27:26,520 --> 00:27:29,880
A great example would be things
like the advanced voice mode
493
00:27:29,880 --> 00:27:31,760
that you have got in ChatGPT
these days.
494
00:27:32,600 --> 00:27:35,680
You can literally ask it, right?
To speak to me like a teacher,
495
00:27:35,720 --> 00:27:39,320
speak to me like a boss in my
company, speak to me like
496
00:27:39,640 --> 00:27:42,560
a stranger and stuff like that.
So you can actually use it to
497
00:27:42,560 --> 00:27:47,720
simulate a person, a human who
you have never met before. For
498
00:27:47,720 --> 00:27:51,560
an introvert, for a shy person,
that is perfect for knowing
499
00:27:51,560 --> 00:27:54,960
beforehand what it might be like
to interact with such a person.
500
00:27:54,960 --> 00:27:57,400
What might it be like to
interact with the CEO of a
501
00:27:57,400 --> 00:27:59,120
company?
What might it be like to
502
00:27:59,120 --> 00:28:01,920
interact with a potential
investor and stuff like that.
503
00:28:01,920 --> 00:28:04,760
So it can really be useful,
especially I think for young
504
00:28:04,760 --> 00:28:07,920
professionals, for young people
who haven't seen life in and
505
00:28:07,920 --> 00:28:10,120
out.
We are very much at the
506
00:28:10,120 --> 00:28:12,440
beginning of our career.
We are very much at the
507
00:28:12,440 --> 00:28:14,760
beginning of our journey.
We haven't seen like the
508
00:28:14,760 --> 00:28:17,920
struggles that come later on in
life and that we learn from
509
00:28:17,920 --> 00:28:20,960
experiences.
So I think with the AI, we can
510
00:28:20,960 --> 00:28:24,480
actually understand that.
OK, like how is it really like
511
00:28:24,480 --> 00:28:30,440
to be in the shoes of a CEO?
How is it really like to be
512
00:28:30,440 --> 00:28:34,480
in the shoes of maybe someone
who is kind of, you know, like
513
00:28:34,480 --> 00:28:37,520
older and you have like children
and how do you interact with
514
00:28:37,520 --> 00:28:38,800
your children and stuff like
that?
515
00:28:38,800 --> 00:28:42,160
So how do I experience that
without even having reached that
516
00:28:42,160 --> 00:28:44,880
time or that age?
So I think that is where, yeah,
517
00:28:44,880 --> 00:28:48,040
AI could really help
introverted people to become
518
00:28:48,040 --> 00:28:51,400
more extroverted.
And from my experience, as you
519
00:28:51,400 --> 00:28:55,600
start to speak to people, right,
you kind of start to become more
520
00:28:55,600 --> 00:28:57,400
outgoing.
There is no way it cannot
521
00:28:57,400 --> 00:29:00,160
happen.
Speak every day to the advanced
522
00:29:00,160 --> 00:29:02,400
voice mode, just for 15 minutes
every day.
523
00:29:02,400 --> 00:29:06,160
And I think literally in one
month you will be much more
524
00:29:06,240 --> 00:29:07,840
extroverted and much more
outgoing.
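A rough text analogue of the practice technique he suggests, sketched against the OpenAI Python SDK; the persona, messages, and model name are illustrative, and the spoken version lives in the ChatGPT app's voice mode rather than in code.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = "You are a skeptical potential investor. Ask tough follow-up questions."
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Hi, I'd like to pitch you my startup idea."},
    ],
)
print(reply.choices[0].message.content)  # rehearse the exchange before the real one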
525
00:29:09,320 --> 00:29:12,560
That's a very creative way to
use the existing tools.
526
00:29:12,840 --> 00:29:15,360
And as you were speaking, I was
actually thinking, yes, that is
527
00:29:15,360 --> 00:29:18,200
so right.
It's a way for us, I guess, to
528
00:29:18,200 --> 00:29:23,720
also be able to more deeply
empathize with people whose life
529
00:29:23,720 --> 00:29:26,520
experiences might be very
different to our own.
530
00:29:26,880 --> 00:29:30,440
And I think that's definitely a
great way to garner more
531
00:29:30,440 --> 00:29:34,000
understanding and more patience,
especially when we have to, you
532
00:29:34,000 --> 00:29:37,400
know, for example, work in a
very multicultural group or
533
00:29:37,400 --> 00:29:40,760
perhaps work with a group of
people who are just naturally,
534
00:29:40,760 --> 00:29:43,520
by default and by their own
experience, very, very different
535
00:29:43,520 --> 00:29:46,240
from us.
What would you say to young
536
00:29:46,240 --> 00:29:50,440
people who might feel like they
don't fit the mold, whatever
537
00:29:50,440 --> 00:29:55,240
that may be?
Believe in yourself,
538
00:29:56,120 --> 00:29:59,720
trust your skills, trust your
ability to change the world.
539
00:29:59,720 --> 00:30:03,520
Like literally every one person,
every single person on this
540
00:30:03,600 --> 00:30:06,600
planet, they have some unique
skills that you are born with
541
00:30:06,600 --> 00:30:09,600
which you can use to change the
world.
542
00:30:09,600 --> 00:30:12,440
Like every single thing that you
do in your everyday life that
543
00:30:12,440 --> 00:30:16,560
can actually help you to create
an impact that goes bigger,
544
00:30:16,680 --> 00:30:20,040
that can actually speak
more than words, you know, like
545
00:30:20,040 --> 00:30:23,520
actions speak louder than words.
So you can actually write your
546
00:30:23,520 --> 00:30:26,440
own story every single day.
So that is what I would
547
00:30:26,440 --> 00:30:29,880
encourage young people that you
might be introvert, you might
548
00:30:29,880 --> 00:30:33,760
not fit the expectations of the
society or every single person
549
00:30:33,760 --> 00:30:36,680
in the society.
It doesn't really mean there is a
550
00:30:36,680 --> 00:30:39,400
weakness in you.
It is something that could be
551
00:30:39,440 --> 00:30:42,160
a positive quality, a positive
trait in you.
552
00:30:42,360 --> 00:30:45,400
You just have to leverage that.
Whatever is your weakness,
553
00:30:45,400 --> 00:30:49,400
identify how you can turn that
into your biggest superpower.
554
00:30:49,400 --> 00:30:54,400
If it is introversion, maybe,
generally, maybe you are a good
555
00:30:54,400 --> 00:30:56,560
writer.
Maybe you might be a
556
00:30:56,880 --> 00:30:59,160
good thinker.
So how you can become a
557
00:30:59,160 --> 00:31:01,760
scientist, how you can
potentially win the Nobel Prize,
558
00:31:01,760 --> 00:31:03,080
right?
Like have big dreams.
559
00:31:03,080 --> 00:31:06,960
I think like I never
imagined like even for one day
560
00:31:06,960 --> 00:31:10,360
in my life that I would be in
Forbes Under 30, you know. So it
561
00:31:10,360 --> 00:31:14,920
was like have big, big dreams
and kind of never give up in
562
00:31:14,920 --> 00:31:17,440
your life.
That's the biggest encouragement
563
00:31:17,440 --> 00:31:20,680
to young people.
Who knows what you might become
564
00:31:20,680 --> 00:31:23,200
one day, right?
So just keep on going
565
00:31:23,200 --> 00:31:25,560
and let your work speak for
itself.
566
00:31:25,560 --> 00:31:28,200
And it will reach the right
people at the right time.
567
00:31:28,200 --> 00:31:32,800
And then that's where you can
speak up and you can share your
568
00:31:32,800 --> 00:31:35,600
views with the right people.
And you might be speaking to
569
00:31:36,160 --> 00:31:39,480
some of the biggest people,
biggest leaders on the planet,
570
00:31:40,040 --> 00:31:43,760
the best leaders in the world.
They are the ones who give space
571
00:31:43,760 --> 00:31:47,080
to their subordinates, right?
So they let their subordinates
572
00:31:47,080 --> 00:31:50,400
or their colleagues speak first.
And they are very humble.
573
00:31:50,400 --> 00:31:53,600
They are very grounded.
I think this is like a very,
574
00:31:53,720 --> 00:31:57,640
very much a unique trait which
could turn you into a
575
00:31:57,640 --> 00:32:02,640
leader as you grow in your life.
For yourself, what kind of
576
00:32:02,640 --> 00:32:05,560
legacy do you want to be able to
leave behind?
577
00:32:06,800 --> 00:32:10,600
What I want to demonstrate to
young people is that you can
578
00:32:10,600 --> 00:32:14,920
actually shine in the big world
that we have around us by
579
00:32:14,920 --> 00:32:19,480
actually staying as you are.
What I try to do is kind of jump
580
00:32:19,480 --> 00:32:23,040
in at the right moment where I
feel it will help the company
581
00:32:23,040 --> 00:32:25,440
the most.
It will help maybe the society
582
00:32:25,440 --> 00:32:28,720
the most for other people to see
that.
583
00:32:28,720 --> 00:32:32,240
OK, like coming from a shy
child, you can actually reach
584
00:32:32,240 --> 00:32:35,240
this position in life.
You can actually be like Forbes
585
00:32:35,240 --> 00:32:37,040
Under 30.
That's just the beginning.
586
00:32:37,040 --> 00:32:39,280
So there's so much more
that could be done.
587
00:32:39,280 --> 00:32:42,320
If I can become a CEO or
something someday of a big
588
00:32:42,320 --> 00:32:45,800
company, like I think that would
be something I would love to
589
00:32:46,040 --> 00:32:48,480
accomplish.
But yeah, let's see where life
590
00:32:48,480 --> 00:32:50,440
takes us, right?
Wonderful.
591
00:32:50,440 --> 00:32:53,680
Well, I will have to
re-interview you in five years' time
592
00:32:54,280 --> 00:32:56,520
and see just how much further
you have gone.
593
00:32:56,920 --> 00:32:59,120
Thank you so much for sharing
your thoughts, Joyjit.
594
00:32:59,120 --> 00:33:03,480
Let me finish by asking you our
podcast staple, which is what is
595
00:33:03,480 --> 00:33:06,800
one thing that you think will
allow more people to have better
596
00:33:06,800 --> 00:33:10,000
mental health?
Because we talked a lot about AI
597
00:33:10,000 --> 00:33:13,880
today. AI, I feel, could be
useful.
598
00:33:13,880 --> 00:33:16,800
It could be harmful as well when
it comes to mental health.
599
00:33:16,800 --> 00:33:21,520
So if you are just completely
using AI for every single thing
600
00:33:21,520 --> 00:33:24,400
in your life and you are losing
human interaction, that's not
601
00:33:24,400 --> 00:33:25,960
good.
It could lead you to depression.
602
00:33:25,960 --> 00:33:29,960
It could lead you to, you know,
like just doing everything
603
00:33:29,960 --> 00:33:32,800
with AI.
But where AI could be helpful is
604
00:33:34,080 --> 00:33:36,000
it could help with your mental
health as well.
605
00:33:36,000 --> 00:33:39,880
So for example, imagine a lot of
people, especially the older
606
00:33:39,880 --> 00:33:43,440
people, they might live alone.
In today's life, there is not
607
00:33:43,440 --> 00:33:45,720
much social interaction between
everyone.
608
00:33:45,720 --> 00:33:49,320
Like everyone is busy in their
everyday life and they don't
609
00:33:49,800 --> 00:33:52,120
get to spend that much time with
their friends and family.
610
00:33:52,120 --> 00:33:54,000
So that's where AI could be very
helpful.
611
00:33:54,000 --> 00:33:57,000
You have a companion at the
click of a button.
612
00:33:57,000 --> 00:34:00,600
You just click on that button
and then you have an AI that can
613
00:34:00,600 --> 00:34:05,160
speak to you and it can just be
your best friend.
614
00:34:05,160 --> 00:34:09,159
It can be your philosopher.
It can act like your
615
00:34:09,159 --> 00:34:12,320
grandchildren, for example, and
it can give you that happiness
616
00:34:12,320 --> 00:34:15,480
which you might not be
experiencing right now because
617
00:34:15,480 --> 00:34:18,360
you are away from your
friends and family.
618
00:34:19,040 --> 00:34:23,239
In a nutshell, use AI in the
ways which give you happiness
619
00:34:23,239 --> 00:34:26,560
rather than in the ways which
make life monotonous for you.
620
00:34:26,960 --> 00:34:31,760
Stay confident, stay grounded
and humble and keep on working
621
00:34:31,760 --> 00:34:34,520
on life, like keep on doing
things that you have to do.
622
00:34:34,520 --> 00:34:37,159
If you like to write
papers, write that.
623
00:34:37,159 --> 00:34:38,880
If you like to write books, do
that.
624
00:34:38,880 --> 00:34:43,159
If you are good in sports, then
do so, but do something that
625
00:34:43,159 --> 00:34:45,880
kind of will give you a
competitive advantage and that
626
00:34:45,880 --> 00:34:49,679
will set you apart in society
and maybe 50 years from now, people
627
00:34:49,679 --> 00:34:51,719
will remember your name, you
know, like that.
628
00:34:51,719 --> 00:34:55,120
OK, this person was the
contributor for so and so thing
629
00:34:55,120 --> 00:34:58,760
in the society and he made this
change in the society which
630
00:34:59,080 --> 00:35:02,200
which the world would always
remember and your name would be
631
00:35:02,200 --> 00:35:06,560
carved in history.
What a wonderful way to end our
632
00:35:06,560 --> 00:35:10,480
conversation, Joyjit. Thank
you so much for taking the time
633
00:35:10,480 --> 00:35:14,640
to share your personal journey,
the comments that you received
634
00:35:14,640 --> 00:35:18,280
as a child, which actually
inspired you to keep working and
635
00:35:18,280 --> 00:35:21,480
stay true to who you are.
And so thank you for educating
636
00:35:21,480 --> 00:35:24,200
us about some of the key terms
within AI.
637
00:35:24,480 --> 00:35:27,200
It was a pleasure to have you on
the podcast.
638
00:35:27,880 --> 00:35:29,880
Thanks a lot.
Yeah, nice speaking to you as
639
00:35:29,880 --> 00:35:30,440
well.
Cheers.
640
00:35:30,480 --> 00:35:33,840
Thank you.
That's a wrap for today's
641
00:35:33,840 --> 00:35:35,920
episode of the Low to Grow
podcast.
642
00:35:36,520 --> 00:35:40,280
If it resonated with you, leave
a review and hit follow to help
643
00:35:40,280 --> 00:35:42,800
more people to find important
conversations.
644
00:35:43,400 --> 00:35:45,400
Keep growing and see you next
time.