
Humayun Sheikh, CEO of Fetch.ai, on AI, Machine Learning, Autonomous Agents, and Blockchains | Ep. 243

In an exclusive interview with cryptonews.com, Humayun Sheikh, Founder and CEO of Fetch.ai, talks about building the infrastructure required for autonomous software agents, connecting smart agents to automate web3 systems, and the intersection of AI and Crypto.

About Humayun Sheikh

Humayun Sheikh is the Founder and CEO of Fetch.ai. He is an entrepreneur, investor, and tech visionary passionate about AI, machine learning, autonomous agents, and blockchains. In the past, he was a founding investor in DeepMind where he supported commercialisation for early-stage AI & deep neural network technology. Currently, he is leading Fetch.ai as a CEO and co-founder, a start-up building the autonomy of the future. He is an expert on the topics of artificial intelligence, machine learning, autonomous agents, and the intersection of blockchain and commodities.

Humayun Sheikh gave a wide-ranging exclusive interview which you can see below, and we are happy for you to use it for publication provided there is a credit to www.cryptonews.com. 

Highlights Of The Interview

  • Building the infrastructure required for autonomous software agents
  • Connecting smart agents to automate web3 systems
  • The intersection of AI and Crypto
  • Agent-based trading tools for decentralized exchanges (DEXs)
  • Development of advanced blockchain technologies to overcome the security and scalability challenges of existing implementations


Full Transcript Of The Interview

Matt Zahab 
Ladies and gentlemen, welcome back to the Cryptonews Podcast. Buzzing as always, coming in hot. Our guest today is from the one and only London, England. What a treat. One of the best cities in the world. Today we have Humayun Sheikh, the Founder and CEO of Fetch.ai. He is an entrepreneur, investor, and tech visionary who's passionate about technologies such as AI, machine learning, autonomous agents, and blockchain. In the past, he was a founding investor in DeepMind, where he supported commercialization for early-stage AI and deep neural network technology. Currently, he is leading Fetch.ai as CEO and Co-Founder, a startup building the autonomy of the future. Humayun is also an expert on the topics of artificial intelligence, machine learning, autonomous agents, and everything blockchain and commodities related. Been a while, pumped to have you on. Welcome to the show, my friend. How are you doing?

Humayun Sheikh
It's a pleasure. It's great to speak to you guys. I'm really quite excited. Exciting times for AI. 

Matt Zahab 
It's crazy that you and the team started building this well in advance. I think a good place to start, before we get into sort of your journey into the crypto space, would be the early days of DeepMind. I know you were a founding investor in DeepMind, and DeepMind is obviously one of the most important AI and neural network technology companies in our beautiful world. I'd just love if you could share anything interesting or crazy or funny or anything of the like with us from the days at DeepMind.

Humayun Sheikh
It's been a while now, I think probably 12, 13 years. I remember when we started it, we were sitting in a restaurant in Cambridge, Demis and I. Obviously Demis is a friend who was always working on games, and I remember exactly the conversation we were having: yeah, we should start a company which can actually build AGI, which was Demis's passion. And I wrote the first check there and then. That was the first check which went into the company, which was great. I think the more interesting bit about DeepMind was that the biggest problem we were having, which I think is also quite obvious these days, is the compute, the training, the amount of money it was costing to compute and train these models. That was one of the biggest problems then. It still is the biggest problem now. And the second point I want to raise is, how do you monetize these models? Right? You can only monetize it for big corporations if they come in and either buy you out, which is what happened with DeepMind, because that was the best option. I mean, Google had the greatest compute in the world, it could take it. It has a lot of data on people. It has a lot of data generally. So where the applications are is with big corporations. That was very clear. And when I came out of it, one of the passions I had was, how do you take all these machine learning models, which are going to be the gold? It's not going to be just data, it's going to be these machine learning models. There will be a machine learning model for everything, right? So how do you take that and how do you commercialize it so that everybody can participate in it? And that was the start of the Fetch journey, that's how we got into starting Fetch, and we've not looked back since.

Matt Zahab 
So Humayun, give me the inception story. How did you get started in AI? Because obviously it's the biggest topic present day. It's the crypto of 2020, 2021, and early 2022, where it was the creme de la creme, the talk of the town. Everyone wanted to get into it. It seemed like the biggest and brightest thing. Present day, that's what AI is. You obviously got in years and years ago. You said DeepMind was about 13 years ago, which is bananas. What was it about AI, or who introduced you, or what was the catalyst? What was that thing or person that got you into the field of AI?

Humayun Sheikh
Where I started from was computer science. So we were always in kind of AI, because you start from algorithms and you start building. Then there was a very obvious transition to building machine learning models. And I was in the gaming industry, where Demis and I had been involved in a game which was about building these agents playing the game against each other. So it's the gaming industry which started this for me, perhaps. And it became quite clear that the compute infrastructure wasn't right at that point, and the machine learning models were not well trained, partly because the data wasn't available. But once you get over that hurdle, then you could see this was the future of almost everything. We have all been using AI in different parts of our lives for a very long time, so it's not new. I think what's changed now is really just this focus which OpenAI brought in terms of the large language models, because the training is becoming cheaper and more affordable and the actual algorithms are getting better. If you look at these large language models, I mean, we started in the 1970s or something. So machine learning models are not new, predictions are not new. We have used them, bigger companies have used them, the older aggregators. I mean, recommendation systems, including Google's ad system, have all used machine learning and AI. Siri, Apple phones, any of these things use machine learning and AI. So it's not new. I think the point is that now everything else has moved on. The compute is available, it's possible to train models, and you could see that with this whole connection there is going to be a paradigm shift. That's what's going to come next. So, just to reiterate, this is not completely new. This is just an evolution at this stage.

Matt Zahab 
Yeah, it's finally getting trendy now. Very interesting. One of the other really big AI-related topics is the race to grab chips. Obviously, you have Nvidia, whose stock is, what, 100X in the last year or two, whatever the case may be. But what's going on with the race for chips? To my knowledge, obviously it's great, and I'm Canadian, I'm not American, but on our edge of the pond, if you will, it's great that the USA does have a behemoth in a manufacturing industry that is becoming extremely and ever more important. But what's the end game here with chips? Is it going to be similar to a chip for a computer or a phone, where years ago they were hundreds of thousands of dollars and the size of a house, and now they're a couple of millimeters and very cheap? Is it going to follow the same trajectory? What's the deal there?

Humayun Sheikh
Yeah, why is it becoming so important? One of the reasons is obviously that these models are big, you need to train them. Some of them are general models, which are even bigger, but then you have to train the specific model. So every industry, every player who needs to do something in this space needs to be training these models. Now, a couple of things I see happening. The models are going to get better, so the data, the training required, is going to start reducing, and that's already happening. I've seen plenty of DeepMind work on that, so you wouldn't have to train it that intensely. I mean, you could have fewer parameters and get better predictions in terms of what it predicts, what it says. So that's going to happen on one side. On the other side, as you mentioned, Nvidia and the chips need to be there. Every industry needs to have some ability, some compute, to train. So you're going to see a lot more research into making these GPUs, the chips which are going to do maybe edge computing, edge training of models. At the moment that's where the trend is going to go. So we will see huge improvement in chip technology, but there are limitations right now and we don't know how far we can go. Maybe quantum computing will come in at some point and things will change dramatically if it happens. But we're not there yet. So initially we're just going to keep on following Moore's Law, and I think we kind of broke that a long time ago, so hopefully that will have to happen. But one thing that definitely needs to happen is it needs to become more accessible. The governments, the regulators, the industry, people need to focus on getting the cost down, so small businesses, small to medium enterprises, can also play a part in it. Otherwise it sits under this huge big corporation control.
As you've seen from all the fundraisers that we've been seeing, 100 million, 200 million, majority of that funding is actually going on compute. So the infrastructure of compute is what is getting commoditized but it's not there yet. 

Matt Zahab 
Interesting. Yeah, it's crazy. These multi-hundred-million-dollar raises, and a big chunk of it is literally going to chips. It's absolutely bananas. Let's jump into a topic that interests the absolute hell out of me, and that is autonomous software agents. I sort of started my AI journey last year when everything popped off. Just jumping on the trend, I tried to become, you know, a 10X prompt engineer, if you will. Obviously, I'm nowhere close to that, but stealing the classic 10X engineer from coding and developing, there are now 10X prompters, people who just really know how to jig and jag and game ChatGPT to get exactly what you want out of it, be more efficient and get more. I've been screwing around with Stable Diffusion and all of the Midjourney, DALL-E, all the incredible image generators, a bunch of different AI tools as well. But the thing that really intrigues me is autonomous software agents. I saw a bunch of YouTube videos earlier in 2023 about how you can create an autonomous software agent, which then literally keeps doing those steps for you. I still don't fully understand this, Humayun. I'd love if you could take a nice little deep dive, give us the TLDR, maybe just high level, don't go too crazy nitty gritty, try to explain it as if I were five years old. If you could jump into that, that'd be a treat.

Humayun Sheikh
Yeah. So if I take you through the journey here, we have multiple things happening, right? One of the things is automation. You can just write a script. You can write an algorithm which evaluates certain parameters and then does certain tasks. That's easy to code in software if you know exactly what the measurements need to be, if you know exactly where the data is coming from. It becomes a little bit more autonomous when you don't know what the data looks like, you don't know what the action looks like. You have to predict all of those things before you actually carry out the action itself. Then that's becoming more intelligent, right? So when we say, okay, if this happens, do this, that if-this-then-that kind of statement, that's not really anything. It's a very simple step, but just so that we frame it in the right way. So robotic process automation, which is called RPA, has been happening for the last five, seven years. Call centers are a very good example of that, where the process automation companies enable you to make things simpler. Where you're managing these huge call centers, the data is being managed by these robotic agents. So that's been around for quite a while. But these are very specific things which listen in to your conversation and take the trigger, or provide you information to attend to the call. And that then started expanding into other complex processes, where it's a menial task but somebody still has to do it and you have to teach somebody to do it. So you could automate that. That's been happening in parallel. But what happened with the large language models is that we started, I say understanding, it's not really true understanding yet, but you start actually breaking down the objective you give to this language model, and you actually start seeing what a bit more intelligent response to that would be.
That's also a progression of natural language processing. Combined with that, you now start introducing this ability to build a context and then break it up into smaller chunks to understand what the objective actually entails. That's the interesting part, and that's where OpenAI's ChatGPT led us. Now, what's quite obvious is that once you give the objective to this large language model, what comes out of it is mostly text. So a software agent takes that text and actually converts it into actions. These actions can then be carried out intelligently and autonomously, so humans don't have to carry out those tasks. For example, if I say I am going to go for lunch somewhere, you and I both understand that by lunch we mean a certain time. Going out for lunch means there is some kind of restaurant, or we're going to go to a food place, we're going to go out to lunch with somebody, and there will be a particular time. So all this contextual information needs to be first picked up by this software agent. It then needs to be composed in a way that it's broken down into tasks. And those tasks then actually need to be executed. At this point in time, it's us doing that. So if you need to book a taxi to get to this location, you have to go and say, Uber, I need a taxi to this place, right? Then you say, oh, there is a restaurant, I need to book the restaurant. So I go to OpenTable and say, okay, I'm going to book this restaurant. Those are the processes which are conversions of this objective that you have. Now, you take that and you give it to this software agent, and that's the answer to your question, what kind of things does an agent do? It automates that process. But at the moment, you can't hard-code every agent for every kind of objective, because objectives are so varied, you can't cover everything.
But what this combination of LLMs, software agents, and context enables you to do is to start doing these tasks with less onus on the doer and more onus on the agents. So you can just say, this is my task, put your phone down, and just like your private secretary or somebody who can actually do all these tasks for you, knows you, so knows what kind of food you like, knows where you like to go, it just books it and confirms it to you. That's the interesting bit. Now, start thinking about where we are going with this. So hopefully that, in a roundabout way, answers your question.
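The loop Humayun describes, objective in, LLM decomposition, agent execution, can be sketched in a few lines of Python. This is purely illustrative: `fake_llm`, the task names, and the stubbed actions are invented for the example and are not Fetch.ai's actual API.

```python
# Minimal sketch of an "objective -> tasks -> actions" agent loop.
# fake_llm stands in for a real LLM call; the bookings are stubbed.

def fake_llm(objective: str) -> list[str]:
    """A real system would prompt an LLM to decompose the objective.
    Here the decomposition for the lunch example is hard-coded."""
    if "lunch" in objective.lower():
        return ["find_restaurant", "book_table", "book_taxi"]
    return []

# Registry mapping task names to executable actions (the "agent" part).
ACTIONS = {
    "find_restaurant": lambda: "Chosen: a nearby restaurant",
    "book_table": lambda: "Table booked for 12:30",
    "book_taxi": lambda: "Taxi booked to the restaurant",
}

def run_agent(objective: str) -> list[str]:
    """Decompose the objective via the LLM, then execute each task."""
    results = []
    for task in fake_llm(objective):
        action = ACTIONS.get(task)
        if action:
            results.append(action())
    return results

for r in run_agent("I am going to go for lunch somewhere"):
    print(r)
```

The point of the sketch is the division of labor: the LLM only produces text (a task list), while the agent holds the mapping from task names to real-world actions.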

Matt Zahab 
That was great. No, that was truly great. How far out are we from having full-on autonomous agents that can do exactly what you described? Matt and Humayun want to have a nice dinner in London at XYZ restaurant, and with the autonomous agent, I'd literally just be like, hey, book me a restaurant at blah, blah, blah, and it does it all. How far out are we till that happens?

Humayun Sheikh
We're not very far, because that's doable right now on Fetch. Our platform enables you to integrate things into it whereby you can carry out all that execution. And that's right now. When I say right now, it's technically possible. What is not possible yet is that this new evolution of the marketplace has not happened. Not everybody understands that this whole search and discovery is going to completely change. At the moment, if you have to search for a restaurant, you have to go somewhere, you have to say, I want a restaurant, and you find one. Then you see if the timing is available. You are doing that work. But what needs to happen is that this all has to be turned on its head, whereby your agent does the work. So all these restaurants will need to have some presence in this new kind of ecosystem, this new ecommerce world, and that's what's not happened yet. The technology is very much available. I mean, we know that because we're building that technology, and we'll be showcasing end-to-end demos very soon, where it takes an LLM or multiple LLMs, chooses the right LLMs, and then executes multiple tasks, informs the user or asks for choices, executes the task and lets you know. All of those things are very possible at the moment. So we're not far at all. Of course this will keep improving, but what this means is a complete paradigm shift. Really, what's going to happen is that this online advertising model is in certain cases going to change. I'm not saying textual searches go away, they might become more interesting as well, but this is where the true revenue models are going to be generated.

Matt Zahab 
So cool. Humayun, we've got to take a quick break and give a huge shout to our sponsor of the show, PrimeXBT. And when we get back, we are going to keep diving into Fetch.ai, the intersection of Crypto and AI, and some more Fetch related stuff like micro agents and the Agentverse. But before then, huge shout out to our sponsor of the show, that is PrimeXBT, longtime friends of cryptonews.com and generally just incredible people, the whole team is. PrimeXBT offers a robust trading system for both beginners and professional traders. Doesn't matter if you're a rookie or a vet, you can easily design and customize your layouts and widgets to best fit your trading style. PrimeXBT is running an exclusive promotion for listeners of the Cryptonews Podcast. The promo code is CRYPTONEWS50, and that will give you 50% of your first deposit credited to your account that can be used as additional collateral to open positions. Again, the promo code is CRYPTONEWS50, all one word, to receive 50% of your deposit credited to your trading account. Now back to the show with Humayun. Let's jump right into the Agentverse, which is sort of the sandbox for experimentation, learning how to use the browser, discovering other agents. It's really sort of the HQ of anything agent related. I'd love if you could take a deep dive into that and give a couple of use cases for people who perhaps are not as technologically advanced, like myself, that could hopefully participate in the Agentverse.

Humayun Sheikh
Yeah, so just to dig into that, let me set the scene first. We talked about LLMs, right? Let's say you somehow input your objective. You write a piece of text which says, I want to travel to London, I'm going to travel with my family, and these are my dates. Or you don't even say that, you just say, I want you to arrange my travel. Right now, if you go to ChatGPT and say, do this, it will give you in exact detail what you should do and how you should do it. But it stops there. You read it and you think, okay, that's great, but what do I do now? So now you go and do the work yourself. So take it to the next stage, where you input this objective and it tells you all the tasks that you need to do. Then it starts connecting you, I mean, this is the functions bit which has come in. You can start connecting to the right kind of functions and actually start executing something. But you then have to go to that platform to execute something. So for example, the function might connect you to the expedia.com API, right? And then you can go in there and try some very inefficient booking, but it's quite limited. So now let's go back and think about this. You go in and you do exactly the same. You put in the objective. But this objective can be delivered to the OpenAI LLM, it can be delivered to some other LLMs, multiple LLMs. And out of that comes the fact that your objective is related to, let's say, travel. The first thing you understand is that it's travel, and as a user you're not aware of it. This is happening in the background. So what comes out of it is that there is travel involved. You've gone into that travel bucket. You then go and look for other specialist LLMs which have been trained to be more travel specific. You find one, but how do you find one? You have to have this search and discovery mechanism, or a database of these LLMs, which you then connect to via their agents to query them.
So the Agentverse is the starting point of the Google of agents, for example. That's probably the easiest way of saying it. It's a search engine which connects you, when you need it, to anything an agent can connect to. So in the first instance you'll be connecting to a specific LLM, which is a travel-based LLM, and you send the query into that LLM. Out come, again, all the tasks. Let's say the tasks are: book a hotel from this date to this date for two people, for a business trip or a pleasure trip, book a flight, book a transit from the airport to the hotel. All these things are now broken down. So now you say, okay, my first task is I want to book a flight. So you go into this marketplace, which is the Agentverse, and you say, hey, I need to book two tickets, business class, from Toronto to London. Who is there to fulfill my requirement? Now at this point in time, there's nothing, because everybody is just their own person. So you need this ability to connect to the agents on the other side, but how do you find them? That's the job of the Agentverse. Effectively, you don't even need to see the Agentverse. It's just showing you that things are coming online. Actually, once everybody's using this technology, nobody should go to the Agentverse to see anything, because there is nothing to see. All that happens is your agent says, I need tickets, what's the best price, what's the availability? Those who have the availability respond to you by saying, yes, we can do two tickets at this price, and you get the option, you say, yeah, this one. And it goes and books it, because it already has control of your credit card, because you have granted it. It gets a confirmation, you book it. But there has to be a place where these agents can actually meet. They can send a query, they can receive a request, and they can be connected to execute this task. And that's the purpose of the Agentverse.
You have an agent which has control of the whole objective, and then you have these micro agents which are doing micro tasks. So you have multiple tasks getting delivered, and these tasks then get executed one by one, in either parallel or serial fashion, because unless you have the flight, you don't want to book a hotel. So once you've confirmed the flight, then you can say, okay, now I can give you options for the hotels. I have these three options, because I don't need 10,000 options. Out of the 10,000 options which Google gives me, they're all useless to me, because it doesn't know I don't travel in anything less than four star. For example, I'm not going to stay in a place which is less than four star, but I don't want to pay more than $200 a night, right? All this context is built into your agent, and when it queries this whole place, it needs to find you three or five or ten. It depends on you. That's a very simple example, but you can add complication. You can build different layers on top of it. You want to be traveling, you want to be doing other things. So all of that needs to be in a marketplace where all these services are available and your agent can go and connect to them. And that's the Agentverse.
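The marketplace query Humayun walks through, a user's agent filtering offers by its owner's constraints, can be sketched roughly like this. The hotel data, field names, and `query` function are hypothetical stand-ins for illustration, not the real Agentverse interface.

```python
# Toy sketch of an Agentverse-style matching layer: service agents
# register what they offer; a user's agent queries with its owner's
# constraints and gets back only the offers that satisfy them.

hotels = [
    {"name": "Hotel A", "stars": 5, "price": 180},
    {"name": "Hotel B", "stars": 3, "price": 90},
    {"name": "Hotel C", "stars": 4, "price": 150},
    {"name": "Hotel D", "stars": 4, "price": 320},
]

def query(offers: list[dict], min_stars: int, max_price: float) -> list[dict]:
    """The user's agent applies preferences to the marketplace listings."""
    return [o for o in offers
            if o["stars"] >= min_stars and o["price"] <= max_price]

# "Nothing less than four star, no more than $200 a night."
matches = query(hotels, min_stars=4, max_price=200)
print([m["name"] for m in matches])  # ['Hotel A', 'Hotel C']
```

The filtering itself is trivial; the interesting part in the scenario above is that it happens agent-to-agent, against live availability, instead of the user scrolling 10,000 search results.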

Matt Zahab 
How do the agents, or the AI, whatever you want to call it, know your personal preferences? Is that just through going through emails, trial and error? Or do you feed it data? How does it pull data to know what Matt or Humayun wants when it comes to traveling or eating or whatever the case may be? How does it learn about you?

Humayun Sheikh
Yeah, it's all of those things. But the best thing is that you tell it, because then you'll tell it exactly what you want, right? We're not trying to listen in to anybody's conversation to guesstimate what they want. Why? Because we're not trying to sell advertising, so we don't need to listen in. You want a task to be done, so you are happy to feed in your preferences, so I don't get it wrong, right? If I'm an agent, you're feeding me the information: we generally travel, two people, we prefer Chinese food, we don't like some other food, Indian food or something. You can give the preferences and it learns from them. And then it can also do reinforcement learning, because when you reject a task, it trains its model to understand. So if 10,000 people rejected a task, what other things would they have liked or not liked? Your agent can fall back on that kind of collaborative model, which teaches it what to offer you when you have a request, when you have an objective. But obviously this is a complete paradigm shift, so it will take its time, because you won't be very comfortable just saying to your agent, oh yeah, you can deal with everything. That's not going to be possible. So what will happen is that you will actually have to nurture your agent, train it. You have to say, here's my agent, this is representing me. I mean, people use different words, there is a distinction, but a digital twin, for example. You're going to train your agent, your digital twin, and then suddenly that agent goes out and starts fulfilling your objective in the right way.
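The preference mechanism described here, explicit preferences plus learning from rejections, might look something like this toy sketch. The class name, scoring scheme, and update rule are invented for illustration; a real system would use a proper collaborative-filtering or reinforcement-learning model rather than a flat score table.

```python
# Sketch of an agent preference store: preferences are stated explicitly
# up front, then nudged downward whenever the user rejects an offer,
# a crude stand-in for the reinforcement-style feedback described above.

class PreferenceAgent:
    def __init__(self) -> None:
        # cuisine -> score; positive means liked, negative means disliked
        self.scores: dict[str, float] = {}

    def tell(self, cuisine: str, liked: bool) -> None:
        """Explicit preference, as stated by the user."""
        self.scores[cuisine] = 1.0 if liked else -1.0

    def reject(self, cuisine: str) -> None:
        """Implicit feedback: every rejection lowers the score."""
        self.scores[cuisine] = self.scores.get(cuisine, 0.0) - 0.5

    def rank(self, options: list[str]) -> list[str]:
        """Order options by how well they match the learned preferences."""
        return sorted(options, key=lambda c: -self.scores.get(c, 0.0))

agent = PreferenceAgent()
agent.tell("chinese", liked=True)
agent.tell("indian", liked=False)
agent.reject("pizza")
print(agent.rank(["pizza", "indian", "chinese"]))  # ['chinese', 'pizza', 'indian']
```

The explicit `tell` channel matters as much as the learning: as Humayun says, the user states preferences up front, and the feedback loop only refines them.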

Matt Zahab 
So cool. I can't wait till all this stuff pops off and goes incredibly mainstream. I just think it'll make our lives incredibly more efficient and enjoyable. I do. I think it's going to remove a lot of friction and a lot of the nonsense that we currently deal with. My only worry, and it's not really a worry because again, I'm selfishly here, only thinking of myself, but I believe that I will continue to work hard enough and learn what I need to in order to not be replaced by AI. But I do worry that millions of people will be jobless, because there are so many entry-level jobs, perhaps customer service related jobs, task related jobs, that will now be nullified because this technology is so powerful, which is really cool to see. But let's take a quick moment to jump into the intersection of AI and Crypto and how you guys are moving and grooving in this. I know that you guys have announced the development of a new suite of agent-based trading tools for decentralized exchanges. Give me a quick overview of what you and the team are working on in that regard, and the whole intersection of crypto/blockchain and AI as a whole.

Humayun Sheikh
Let's start with the technology, because crypto is seen as the tokens. So let's park the tokens aside for the minute. Let's start with the blockchain technology itself. Why is the intersection important? There are two main reasons I'm going to focus on here. One: let's imagine this world where all these agents are going around, right? And you ask an agent to do something, and there's a transaction which happens. This agent is going to go and say, hey, this LLM, are you able to give me this information? It takes that information. Or, can you give me a prediction? Say ten people have created recommender models, and your agent can go to every one of them or just one of them, based on what the agent has been trained to do. But that recommender engine will have to be paid somehow. The transaction will have to be recorded somehow. And if you're going to run an open system, where you're enabling everybody to deploy their models and their services on this decentralized platform, you need to have a ledger, you need to have some network which is open. So the first part of this whole puzzle is you need a mechanism to record all these transactions. And I don't need to convince anybody here: blockchain has proven itself to be one of the best ways to do it. Yes, it's heavy in terms of actually holding the data, but you don't have to put all the data on it. You can monitor transactions, you can create an audit trail. So if the regulator comes and says, why was this transaction done, you can go back and see, okay, this was a transaction between this party and this party. These are wallets, right, or agent addresses. And if you really need to disclose the information, then you are able to do it. Or you can create identities, the self-sovereign identities which sit on these agents, wallets, these accounts, to actually provide that audit trail.
That's a very important part, because if this space really blows up and you start using agents, there will be initial problems, there will be long-term problems. You need to monitor it all, rather than saying, oh well, it's a black box under big corporate control and we don't know whose data we trained on, so we can't really give you any information, or we can't really incentivize people to do anything because we're just using everybody's data for free. So that excuse goes away. And then all these agents need to be interacting with each other, they need to keep a record. And if the system is going to be a huge system, it needs an orchestration layer, which blockchain can provide quite easily, and it's in the control of not one party. It's in the control of all the people who are actually part of this network. And they can also have an economic benefit. They can have a financial gain by hosting this, rather than just one corporate. Which brings me back to your concern that a lot of people are going to lose their jobs. Yes and no, because the jobs will become different. The same people who are consumers, if the technology barrier is lowered so that you can host a node and you can host this network, can actually start generating revenue out of the consumption they're also part of. So there is a financial reward mechanism which also comes with tokens, and now I'm bringing in the token side: the tokenization either of the network, of the service, of the query, of the data, of the training of the model. You could actually create tokenized machine learning models which people can use. But such a network is a huge undertaking for any company to take on.
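The audit-trail idea, every agent-to-agent payment recorded in a tamper-evident chain, can be illustrated with a toy hash-chained ledger. This is only a sketch: the agent addresses and fields are made up, and a real blockchain adds consensus, signatures, and distribution across many nodes.

```python
# Toy append-only ledger: each agent-to-agent payment is recorded with
# a hash chained to the previous entry, giving the kind of audit trail
# described above. The agent addresses here are invented for illustration.

import hashlib
import json

ledger: list[dict] = []

def record(payer: str, payee: str, amount: float, memo: str) -> None:
    """Append a payment, linking it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"payer": payer, "payee": payee, "amount": amount,
             "memo": memo, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify() -> bool:
    """An auditor replays the chain and checks every link and hash."""
    prev = "genesis"
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

record("agent_alice", "recommender_llm", 0.05, "travel recommendation query")
record("agent_alice", "hotel_agent", 150.0, "hotel booking")
print(verify())  # True
```

Any later tampering with an entry breaks its hash, so the regulator's question, "why was this transaction done?", can be answered from a record that provably hasn't been edited.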

Matt Zahab 
That sounds... I mean, how does one even get started in that capacity?

Humayun Sheikh
No, it will be made quite easy. So you'll be able to provide your data because you're a consumer; you don't have to give your actual identity details. You can have anonymized training of machine learning models — we do that all the time, even now. I mean, you set up a Gmail account, you set up a Microsoft account. That's how easy it should be. You say, okay, I'm going to do that, and that's it, you're now part of the network. Setting up an agent should not be sitting there writing code. Setting up an agent is defining an objective to an interface. So a user interface, which could be a textual interface or a voice interface: you ask for something, and your agent, without you knowing, is created, is deployed, is looking for services for you, providing those services, and in doing so training a model which could actually generate revenue for you.
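The "define an objective, not code" idea could look something like this in spirit — a purely illustrative sketch, not an actual Fetch.ai interface; the `Agent` class, the registry, and the keyword matching are all assumptions made up for this example:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An agent spun up from a plain-language objective rather
    than hand-written code, as described above."""
    objective: str
    services_found: list = field(default_factory=list)

    def discover(self, registry):
        # Match words in the stated objective against a registry
        # of deployed services (recommenders, LLM endpoints, etc.).
        self.services_found = [
            name for name, tags in registry.items()
            if any(word in tags for word in self.objective.lower().split())
        ]
        return self.services_found

# The user only states what they want; agent creation, deployment,
# and service discovery happen behind the interface.
registry = {
    "flight-finder": {"book", "flight", "travel"},
    "hotel-recommender": {"hotel", "travel", "recommend"},
    "price-oracle": {"price", "token"},
}
agent = Agent(objective="Book a flight and find a hotel")
print(agent.discover(registry))  # ['flight-finder', 'hotel-recommender']
```

A real system would of course interpret the objective with a language model rather than keyword matching; the point of the sketch is only that the user's input is a goal, not code.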

Matt Zahab 
So cool. Humayun, you have been a pleasure to speak with and have taught me an incredible amount today. I definitely have some homework to do — I will actually be jumping into some homework right after this. Before you go, can you bless us with a couple of hot takes? Preferably AI-related, but they can be anything, the spicier the better. Give us a couple of Humayun AI-related hot takes before you go, and then we'll do a quick outro and wrap everything up.

Humayun Sheikh
Right, so I feel the AI space is going to explode a lot more, and I think people should not be scaremongering. I think we should be getting engaged, because that's the only way not to be scared of this. We're releasing tools at the moment, and we're probably among the first to do this. In the coming few weeks, we will be opening up our platform, which will showcase most of the things we talked about, and you'll be able to create integrations and carry out tasks. I feel that's where the world is going to go. Compute is going to be commoditized, but there's huge value in deploying maybe some GPUs at home and getting them ready for machine learning models to be trained, because there's a decentralized wave coming where you could generate revenue using that. So I would say that's a very interesting approach and angle, and one should get involved in it. But just to wrap it up: there is quite a lot of hype, and not everything is as useful as it might appear in the first instance. So, yeah, be careful out there.

Matt Zahab 
Well said, mate. Humayun, what an episode. Really appreciate you coming on, and honestly you motivated me as well to get my ducks in a row and keep putting the hammer down: getting better, doing more research, practicing, fooling around, and just becoming more immersed in AI, so when everything pops off, I will not be left behind. And to all the listeners, hopefully Humayun also motivated you to get your ducks in a row and make shit happen. Thank you so much for coming on, really appreciate this. Before we let you go, can you please let our listeners know where they can find you and Fetch.ai online and on socials?

Humayun Sheikh
Yeah. Thank you, Matt. Those were great questions, and I always enjoy speaking to somebody who's really engaged in finding more information. As you say, fetch.ai is the website. Not everything is there; we have some docs at a separate link. If you go in, you'll see the links, but for those who are not overly technical, we will be releasing plenty of things in the coming weeks. Follow us on Twitter. We're also on Telegram. Send us an email. If you want to get engaged, if you want to write some code for us or be part of this new paradigm shift that is coming, join us. We have a Discord channel, we have a GitHub. You can get engaged in all those ways. We're really happy to hear from developers, non-developers, and people who have businesses or business ideas they want to deploy. We're open and ready to interact with you guys.

Matt Zahab 
Love that. Thanks again, mate. Really appreciate it and cannot wait to have you on for round two in the near future. 

Humayun Sheikh
Thank you for listening, guys. 

Matt Zahab 
Folks, what an episode with Humayun Sheikh, Founder and CEO of Fetch.ai, dropping knowledge bombs left, right, and center. If you like AI and you like crypto, which most of us listeners probably do, this episode was an absolute treat. Huge shout-out to both teams for making this happen. If you guys enjoyed this one, and I hope you did, please do subscribe. It would truly mean the world to my team. And speaking of the team, love you guys so much, thank you for everything. Justas, my amazing editor for the pod, you are the GOAT. Appreciate you. And to the listeners, love you guys. Keep on growing those bags and keep on staying healthy, wealthy, and happy. Bye for now and we'll talk soon. Ciao.