Igor Savchenko is the CTO and co-founder of Scalr, a remote state and operations backend for Terraform and OpenTofu. He is also one of the co-founders of OpenTofu. During our conversation with Igor, we explored different DevOps tools and talked about infrastructure challenges and security. We also dived into MLOps, LLMOps, and AI agents, discussed the future of infrastructure, made some predictions, and talked about whether AI agents could replace DevOps engineers.

Transcript

Please note that the transcript is AI-generated and may have errors.

Igor: I still remember that week, it was like one long, long day. We started as a product that did some automation and a UI for cloud services. You found some really, really old-school software there. Let's not forget about CFEngine.

Dmytro: This year will definitely be the year of AI agents. We will see a lot of interesting solutions based on AI.

Igor: AI will replace everything and everyone and like we're done.

Dmytro (00:40:17): Hello everyone, and welcome to the first episode of the Dmytro Spodarets podcast. In it, we explore the world of technology, cloud solutions, infrastructure, and AI, as well as business, education, and sport. I am really excited to kick off this podcast with an amazing guest, Igor Savchenko. Igor is the CTO and co-founder of Scalr, a remote state and operations backend for Terraform and OpenTofu. He is also one of the co-founders of OpenTofu, an open-source project in the infrastructure-as-code ecosystem that was created by the community after HashiCorp changed Terraform's license. During our conversation with Igor, we explored different DevOps tools and talked about infrastructure challenges and security. We also dived into MLOps, LLMOps, and AI agents, discussed the future of infrastructure, made some predictions, and talked about whether AI agents could replace DevOps engineers.

This is the Dmytro Spodarets podcast. Subscribe to us on YouTube, Apple Podcasts, and Spotify. If you like my podcast, I would really appreciate a 5-star rating. And now, dear friends, here is Igor Savchenko. Let's start our small discussion.

Igor (02:15:28): Yeah. Hey, Dima. Happy to be here. Let's see how it goes, yeah.

Dmytro (02:20:70): Let's start with the first very interesting question: will AI replace DevOps engineers? Many analysts say this will be the year of AI agents, so automation is moving very quickly in all areas. What do you think about infrastructure?

Igor (02:48:85): It's a good question. I mean, right now everyone talks and thinks about AI, and I ask myself the same question every day. I think there's a lot of hype there, like, "Yeah, we'll lose our jobs and AI will replace everything and everyone, and we're done. Skynet, etc." Maybe eventually, in 10, 15, 20 years, that will be the case, but right now the most common real use of AI I see around is repetitive, boring jobs and tasks: "Hey, give me a summary," or some simple tasks. In the DevOps world, it's the stuff a junior DevOps engineer would usually do: "Hey, analyze these logs, find some… things there, aggregate those metrics," etc. And I think over time, as AI agents get access to more real-time data and to more of the different operational contexts that organizations have, the more they will be able to do and help. But honestly, thinking that AI agents will replace DevOps in the near future, I don't think that will be the case. Maybe I'm too pragmatic or pessimistic, call it what you want. But when I see how deployment, DevOps, and platform teams operate in most organizations today, where you need to get 10,000 approvals and checks and reviews, I'm not sure AI will quickly replace that bureaucracy and those guardrails in large organizations.

Dmytro (04:54:17): Yeah, large organizations have a lot of bureaucracy. But maybe large organizations will also start to use AI to speed up that bureaucracy. And at some point in the future, we will have a situation where AI does everything and engineers are the folks who manage that AI.

Igor (05:21:53): That could be, yeah, absolutely. So we'll all become managers. But in the near term, I see that the use of AI can significantly boost the productivity and efficiency of, let's say, DevOps and platform teams. So with the same amount of human resources (and I hate calling it human resources, but it is what it is), when you have a team of five people, they can do way more than they could without AI. This is what I think we'll see more and more. So in the near term, I see AI not really replacing DevOps, but making DevOps more efficient and more productive, with fewer mistakes. Even things like troubleshooting will take less time: finding the root cause of some incident or downtime, like an outage, and quickly resolving it.

Dmytro (06:20:14): Yes, I think it will move in that direction and help us be more productive. Since we've started talking about productivity, let's talk about Scalr, its history, and how it helps DevOps engineers be more productive by automating different tasks.

Igor (06:42:73): Sure. So, Scalr started a long time ago, in 2009 or 2010, exactly in the era when cloud computing became a thing: when Amazon released their first services like S3 and EC2, when there was only an API, no console, no automation at all. We started as a product that did automation and a UI for cloud services. Back then, Amazon didn't have 10,000 different services; there was only compute and storage, and we built services on top of their building blocks. This is how we started: we were part of the whole ClickOps paradigm and created one of the first CMPs, cloud management platforms. Then around 2019, Terraform became popular (probably even before that), and we started to see more and more Terraform, and our customers and prospects asked us: "Hey, we don't want to do things in a UI by clicking around. It's not really efficient, it causes a lot of problems. We want to automate things, we want to write everything as code." That's when we pivoted to a product family in what is called TACOS: Terraform Automation and Collaboration Software. It has nothing to do with edible tacos, but it's fun. And now our current product helps organizations use Terraform at scale: a platform team builds a platform on top of Scalr that helps the other developers and teams in the organization deploy things in a secure way at scale, with access controls, audit, and a whole bunch of other things that you need at scale.

Dmytro (09:02:29): Amazing. And you support not only Terraform, you also support OpenTofu. That project changed its name; I remember its first name was OpenTerraform or something like that. OpenTF, yeah. I remember when we worked in the same co-working space, you and a few other companies started this project when HashiCorp said they were closing Terraform's code. How was it?

Igor (09:39:95): Funny story, yeah. I still remember that week; it felt like one long, long day. We got a notification, and all of a sudden everyone started talking about Terraform switching from the MPL open-source license to a source-available license, which is not open source: the Business Source License, which basically, overnight, prohibited any competition in the Terraform space. We decided that, hey, there should be an open-source alternative. In our company, and in the other companies that started this initiative, there are a lot of people who truly believe in open source. One thing is to build a business on top of open source, but another is to contribute back to the community and believe in open source, like early-days HashiCorp. We got a couple of phone calls, a couple of conference calls, and decided: let's make sure Terraform continues to exist as open source. And this is how we started the OpenTF initiative. Originally, we actually didn't want to fork. We created a pledge, and I think thousands of developers and hundreds of companies signed it, asking HashiCorp to change their stance and get back to their open-source roots, because a lot of people had contributed to Terraform over the years. Our ask was: leave Terraform open source. Terraform Cloud is a commercial thing, feel free to make money there. You have a competitive advantage because you control the Terraform product roadmap and development, but leave the open source itself as open source. We got no response, so we decided to fork. And then the first challenge appeared: how are we different from HashiCorp, and what will stop us from doing the same thing? This is where a foundation comes into play. We decided to become part of the Linux Foundation because, interestingly enough, in parallel with our initiative, the Linux Foundation had scrambled some conversations about what to do: CNCF and the Linux Foundation use Terraform internally for a lot of projects, and they have a policy that they need to use open-source products; they cannot use commercial products for lots of things. And it's weird for good open-source projects to depend on source code under a commercial business-use license, because technically nobody other than HashiCorp's lawyers knows who can or cannot use it, so it's potentially a legal exposure, plus a whole bunch of other reasons (I'm not in legal, so I don't understand them all). And we needed to build this trust with the community. This is why we decided to become part of the Linux Foundation and eventually CNCF, and the Linux Foundation was super happy about it. Then there was the trademark question: we had to pick a name that was legally acceptable. We couldn't keep OpenTF; if we used OpenTF, in theory we could be sued over it. So we found a neutral name and gave all the rights, permissions, trademarks, and intellectual property to the Linux Foundation. And then we, with other vendors, sponsored the project by hiring full-time engineers to work solely on OpenTofu, based on community requests: the most requested features, bugs, etc. And this is how it all started.

Dmytro (13:49:45): Yeah, it's cool. And as of now, what is the difference between Terraform and OpenTofu?

Igor (13:56:44): I mean, the biggest difference, I think, is that OpenTofu is under a fully open-source license; it's still the Mozilla Public License. As for features: our first major release was fully compatible, and even the current releases are compatible with 99% of Terraform codebases feature-wise. We started by maintaining feature parity and compatibility, so it was not that different, because we were thinking about how people migrate from Terraform to OpenTofu: the fewer incompatible things, the easier the migration. But then, over the next major releases, we added features that had been requested by the community for years, some of them for 10 years, but were always ignored by HashiCorp. Things like state encryption: now when you use OpenTofu, your state is not a plain-text file with a lot of secrets in it; it can actually be encrypted. And recently, in the 1.9 release I think, we added a for_each loop for providers, so you can easily iterate over multiple provider instances and provision things without much code duplication. If you want to deploy something in multiple regions, for example, you just define one block and use for_each to deploy to multiple regions. There are many, many use cases like this. Before that, there was another limitation of Terraform: you could not use variables in module and provider definitions and configurations. This was, I think, in the top three features requested by the community, and we added it too. So we continuously listen to the community. It's a good thing we have a long history of what the community wanted and what HashiCorp never found the time, resources, or will to do, for whatever reason. This is our focus, and we'll see what the future brings.
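
For readers who haven't seen these features, here is a minimal sketch of what they look like in OpenTofu configuration. The region set, module path, and passphrase variable are illustrative, and the exact syntax is best checked against the OpenTofu documentation.

    # Provider for_each (OpenTofu 1.9): one provider block, one instance per region.
    variable "regions" {
      type    = set(string)
      default = ["us-east-1", "eu-west-1"]
    }

    provider "aws" {
      alias    = "by_region"
      for_each = var.regions
      region   = each.value
    }

    # Deploy the same (hypothetical) module once per region, each instance
    # wired to the matching provider instance.
    module "app" {
      source   = "./modules/app"
      for_each = var.regions
      providers = {
        aws = aws.by_region[each.key]
      }
    }

    # State encryption: the state file is encrypted instead of stored as plain text.
    terraform {
      encryption {
        key_provider "pbkdf2" "main" {
          passphrase = var.state_passphrase   # illustrative variable
        }
        method "aes_gcm" "main" {
          keys = key_provider.pbkdf2.main
        }
        state {
          method = method.aes_gcm.main
        }
      }
    }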

Dmytro (16:16:48): And today, how is the community migrating to OpenTofu? Are big companies still using Terraform, or are they starting to look at OpenTofu and use it?

Igor (16:33:59): Good question. We only have anecdotal evidence because, hey, we don't really collect any data in OpenTofu, no telemetry or anything like that, so we don't actually know the real magnitude of use. But from within my company, for example, we see a lot of Fortune 500 companies, big enterprises, that have started to move to OpenTofu, and they're doing it pretty efficiently and quickly. And then we have a lot of customers who come to us and use OpenTofu from day one. Still, even after a year we get the question: "Hey, what about providers?" So unfortunately there's still some misconception and misunderstanding out there, when in reality OpenTofu is not very different from Terraform. I would say in 99% of cases the migration is as easy as switching the binary from Terraform to OpenTofu, and it just works, especially if you haven't used the new Terraform-specific features like stacks, for example. So in most cases the migration is very, very easy and quick, and it's the same providers: they're still open source, and highly likely to continue to be open source, for different reasons. In the cases where we see migration being slow, or even some companies not willing to migrate, it's not really for technical reasons; it's about risk and some politics. First of all, we're a new open-source project, so we need to build trust that, hey, we can maintain it, we can fix the bugs. In a lot of companies, when they start using a tool, they use it for years, and nobody wants to make the decision to migrate to a tool they suddenly cannot use for whatever reason. Another one is support: with Terraform you have enterprise support from HashiCorp, and it takes time to build up the number of vendors who offer commercial support for OpenTofu. Companies like Gruntwork, the creators of Terragrunt, are, I think, doing an amazing job of supporting OpenTofu and deploying OpenTofu for their customers. So yeah, it takes time, and we continuously see growth, we continuously see companies migrating, and more and more companies trust us. I think it's a matter of time until most of them pick OpenTofu, because guess what? There's no reason not to migrate.

Dmytro (19:31:94): Yeah. And what about cloud providers? They have their own modules for Terraform. Will they migrate, or change the modules to work directly with OpenTofu?

Igor: So probably you mean providers.

Dmytro: Providers, yeah.

Igor: Cloud providers.

Dmytro: Cloud providers, yeah.

Igor (19:52:87): So, cloud providers. First of all, the providers are also under the Mozilla Public License; they're open source. And our current understanding is that it's practically impossible to change their license, because when these providers were developed, HashiCorp didn't require a CLA. So technically, if they wanted to change the license of any provider, they would need to get permission and approval from every single contributor who ever contributed to it, which is, I think, a heavy lift. Now, in terms of compatibility: providers use the provider protocol, and it's the same one. In OpenTofu, we do a pretty good job of keeping up with all the new features and changes in the protocol, so all the existing cloud providers and other providers out there can be used with OpenTofu as-is, without needing to change anything.

Dmytro (21:01:78): Yeah, OpenTofu is one of the new things in this era of growing DevOps and infrastructure. We also see other kinds of tools on the market, like CDKs and things like Pulumi. What do you think about these solutions? How will they change the infrastructure world and DevOps?

Igor (21:33:14): You know, it's a great question. Before that, we had cloud-specific things like CloudFormation and Bicep from Azure, then we got Pulumi, then the CDK for Terraform, and a whole bunch of other things. And there is no ideal solution. Every single time, I recognize this mentality: "Hey, nothing is ideal, so I'll create a new one which will be ideal." But it's ideal for me, and now we have another one, and another one. With cloud providers, it's about control. I believe cloud providers have CloudFormation and Bicep as their own tools because they want to control how their infrastructure is used, and they don't want to depend on other vendors for how infrastructure gets deployed, because other vendors can do all sorts of things and create a preference. For example, if we weren't part of the Linux Foundation and could do whatever we wanted, then, let's say, Azure could sponsor certain development to make their provider faster or better. In a world where net neutrality is no longer a thing, it's absolutely possible. So theoretically it's possible, and that's why I think cloud providers have their own solutions. And again, some people ask: why do we need to learn a new language? Why can't I just write my infrastructure definition in JavaScript, TypeScript, Python, Go, PHP, whatever language you want? And yeah, some prefer that. But I still believe that, de facto, Terraform and OpenTofu are the market leaders; I think 80-90% of infrastructure is deployed through them in one way or another, and we're all moving in that direction. And this is another reason why it's so important to have an open-source solution: it should not be controlled by one entity, one company, because everyone can do whatever they want these days. Unless you're part of a foundation.

Dmytro (23:59:13): Yeah, we have different solutions, and it depends on the problem we want to solve. Sometimes using a CDK is better if you want to control your infrastructure from within your application. But it's not very good if, for example, you use it the same way you'd use Terraform, writing some specific application that ends up doing the same thing as Terraform.

Igor (24:34:55): Yeah, I don't believe there are bad solutions. Some believe there are bad languages: "Hey, Python is bad, write everything in Go." "Go is bad, write everything in Rust." I don't believe in bad languages or bad solutions; there are places for all of them in one way or another. And I'm pretty sure there are companies and problems out there that are solved perfectly well by Pulumi or by CloudFormation. But when we talk about the whole variety of different use cases, constraints, and environments, I think tools like Terraform and OpenTofu are the absolute leaders.

Dmytro (25:15:80): Let's talk about older tools like Ansible, Puppet, and Salt; companies have been using them for a long time. Do you think they're still relevant today? Companies often use clouds, and for configuring cloud infrastructure it's better to use Terraform or OpenTofu. What do you think about this?

Igor (25:43:67): You found some really, really old-school software there. Let's not forget about CFEngine. Yeah, look, we can still use a laptop to hammer a nail into the wall, right? You can do that, absolutely, but I'm not sure it's designed for it. Same with Chef, Salt, Puppet, Ansible. We need to remember how they started: they started by configuring stuff at the operating-system level, within the application. You can configure a fleet of VMs, a fleet of machines, manage them, configure the operating system, install patches. They're very good at it, and even these days I'm not sure there are better solutions when you need to manage a fleet of thousands of virtual machines, clusters, etc. Then, before Terraform, things like Ansible and Chef expanded: hey, if you can deploy applications within the OS, why can't you deploy things in the cloud? So they added this use case where you can deploy your infrastructure in the cloud and then configure things within that infrastructure. But it's a solution on top of an existing solution for a different problem. It's like, yeah…

Dmytro (27:13:28): But they also have some functionality for deploying virtual machines in the cloud, and in some ways they duplicate the functionality you get with OpenTofu, for example.

Igor (27:31:55): Of course you can do that, but I'm not sure about it if we're talking about efficiency, productivity, maintainability, and support. A simple example: a guy worked in an organization, he knew Ansible inside out and could do everything in it. He wrote a thousand lines of deployment code where everything was Ansible. Then he happily retired and decided to spend time with his family. Now you need to hire new people to replace him, or the organization grows. I mean, good luck finding all these DevOps engineers who are willing to manage infrastructure that way.

Dmytro (28:11:89): Yeah. It's a very funny situation when all the knowledge of the infrastructure lives in one person's head, and then he leaves the company, and the company goes: oops, we don't know what to do.

Igor (28:29:12): It happens all the time, and that's the point: it's possible to do it that way, but you need to think about the future and how you will maintain it. You need to make sure that when new people come to the company, they can actually understand, maintain, and keep writing this thing, and continue to improve and evolve it. OpenTofu is solely focused on making infrastructure deployment fast, efficient, and secure, and on keeping its declarations simple and very well understood. The same thing may be possible with, let's say, Ansible, but it requires more time and more lines of code, and it's harder to understand. And then, when the infrastructure becomes more and more advanced, when you start using this whole bunch of services and microservices and it grows exponentially, which happens all the time, what happens not on day one, but on day two, day five, day ten? You need to think about it.

Dmytro (29:34:15): Yeah, I think these solutions were good for local infrastructure, for on-premise servers, because with them you can configure those servers however you want; that's what they were created for. So yeah.

Igor (29:52:91): And honestly, I still see a lot of companies that use Terraform to manage the infrastructure and then Ansible to configure it afterwards. Ansible, Chef, and the other tools are actually super nice if you're still using VMs or bare-metal servers these days: you provision them through the cloud with Terraform, and they remain a good solution that works perfectly well alongside OpenTofu and Terraform for the day-two operations.
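
One common shape of this handoff (provision with Terraform or OpenTofu, then configure with Ansible) might look like the following minimal sketch, where the AMI, resource names, and playbook name are all illustrative.

    # Provision the VM with Terraform/OpenTofu.
    resource "aws_instance" "app" {
      ami           = "ami-12345678"   # placeholder AMI
      instance_type = "t3.micro"
    }

    # Hand off to Ansible for the day-two configuration once the VM exists.
    resource "null_resource" "configure_app" {
      triggers = {
        instance_id = aws_instance.app.id   # re-run if the VM is replaced
      }

      provisioner "local-exec" {
        # The trailing comma turns the IP into an inline Ansible inventory.
        command = "ansible-playbook -i '${aws_instance.app.public_ip},' site.yml"
      }
    }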

Dmytro (30:25:07): Yes. Speaking of companies: I remember there was a hype when companies started to go to the cloud and wanted to have all their infrastructure there. But now we see large enterprises on the market realizing that the cloud bill is very high, and they're starting to migrate back to their own data centers and provision their systems and infrastructure there. And in some cases they try to use different clouds, so we get multi-cloud and hybrid infrastructure. What challenges do you see in this area, and what do you see in the market?

Igor (31:19:69): So first of all, yeah, your cloud bill can run out of control super fast, and that's why it's important in the early days to set up the right mechanisms and proper guardrails for how the cloud can be utilized. And then, going multi-cloud just to save cost? I wouldn't do it, honestly, because I think it's a myth that you can easily go multi-cloud and the same code definitions will work across multiple clouds. Honestly, that's bullshit. When you go multi-cloud, the complexity of your infrastructure, the complexity of the code that manages it, and the knowledge required all grow pretty much exponentially; it's not even two-fold, it's five-fold. The only way I've seen companies go multi-cloud is not because they made a decision, but through a bunch of acquisitions: hey, let's say I'm an Amazon user, but then we acquired another company, and historically they're using Azure. And then we acquired another company, and historically they're using Google. This is how organizations become multi-cloud; it's a very common thing. And the same happens with tooling: this business unit we acquired last year is using Ansible for deployment, this one is using Terraform, and another one is using CloudFormation, and now you have a huge mess that's very hard to deal with.

Dmytro (33:08:41): Yeah. Another good topic for clouds, multi-cloud, and hybrid infrastructure is security. What challenges do you see on the market right now related to security?

Igor (33:27:25): Yeah, you know, it's funny, because the recent debacle with DeepSeek, when they left access to their entire database exposed to the internet, where you could query everything, no privacy, nothing, is a great example. I don't know how they deploy and how they maintain security; maybe they don't at all. But that's a good question, and again, this is why you need tools like Scalr and others. When you have a lot of people and a lot of entry points with access to provision infrastructure, unfortunately, people forget things, or they don't know about things, or some default somewhere gets changed. You can catch this through code reviews, for example; that's the first line of defense. Then it can be done with different tools, like Checkov, tfsec, or Open Policy Agent, to prevent bad things from happening down the line. But in an organization, somebody needs to configure this safe environment where you can write your code and deploy it, and the platform will give you a response like: "Hey, you forgot to change the visibility of this bucket to private, so it won't be exposed to the entire internet." This is why (you've probably heard the term "platform team" today) platform teams exist: to build a platform for internal consumers, for developers, that they can use to provision whatever they want in a way that is secure and where cost is under control, but at the same time with a lot of self-service, where you don't need to go through a whole bunch of approvals for every single change you want to make.
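
As a concrete example of the kind of guardrail Igor describes, this is roughly the misconfiguration that scanners like Checkov or tfsec flag, together with the usual fix: an explicit public-access block on the bucket (the bucket name is illustrative).

    # A bucket on its own says nothing about public visibility...
    resource "aws_s3_bucket" "data" {
      bucket = "example-private-data"   # illustrative name
    }

    # ...so policy checks typically require this companion resource,
    # which explicitly blocks every path to public exposure.
    resource "aws_s3_bucket_public_access_block" "data" {
      bucket                  = aws_s3_bucket.data.id
      block_public_acls       = true
      block_public_policy     = true
      ignore_public_acls      = true
      restrict_public_buckets = true
    }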

Dmytro (35:24:27): Yeah, having an internal platform team and internal platforms where you can go and quickly create environments with all the rules in place, without needing to get approvals, really speeds up the whole development process. AI is everywhere today, and all companies want to use AI in their solutions, whether it's classic ML models, the popular LLMs, or the new AI agents. So DevOps needs to build infrastructure not only for the solution itself, but also specifically for AI. What do you see from the OpenTofu or Scalr side in this area?

Igor (36:21:81): Actually, maybe that's a good question for you, because at Scalr we haven't seen much of this yet. What do you see on the market in terms of how organizations that want to do AI and agents deploy things and make them secure?

Dmytro (36:46:21): Yeah, I think it's a great question and a good topic that we could spend a separate episode on. But let's quickly talk about how companies work with AI models today. To manage the infrastructure, companies today use MLOps, which is the same idea as DevOps but for machine learning models. The newer variant of MLOps is LLMOps which, as you can guess, is about operating and working with large language models. MLOps is not a list of tools; like DevOps, it is a practice: rules and agreements between teams on how they work and what processes they have. But on the tech-stack side, there is a really big list of tools and solutions your infrastructure needs in order to operate a machine learning model, because the model itself is only a small piece of the big AI solution.

So what do we need in our infrastructure to create and operate models? First, we start with data: we need a place to store data, tools for labeling data, and tools for versioning data. Then, when we work with AI models, we have two types of environments. The first is the experimentation environment: in it we create the model, run our experiments, and track the parameters of the models, so a lot of stuff, and we need big compute for this environment. The second is the production environment, which is more about operating machine learning models in production: deployment, monitoring, and pipelines that help us get feedback on the model. On one side, we get feedback automatically from logs; on the other side, we need feedback from real users to check whether the model gives them correct results. After collecting all this data, machine learning engineers and data scientists improve the model and publish a new version.

If we talk about LLMs, we need to add more tools specific to LLMs. First of all, frameworks like LangChain and LlamaIndex, which help us build the interface between the LLM and our solution, so we can build things like a RAG system on top of LLMs. We need to work with vector databases. We need tools for prompt engineering and prompt management: we experiment with the prompts we send to LLMs and use in our system and check how they work, so we need that part of the ecosystem too. Of course, we need tools for LLM evaluation, and we need tools that provide security for LLMs. If we use some provider's public LLM, nobody wants to share critical, private information with it. And if we use an internal LLM that has access to private or very critical data, some users of the system should not have access to that data, so we don't want to expose it through the LLM. We need to monitor and analyze what data the LLM gives to the user.
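
To make the RAG shape Dmytro describes more concrete, here is a deliberately tiny, library-agnostic sketch in Python. The embed() and complete() functions are hypothetical stand-ins for a real embedding model and LLM (the pieces frameworks like LangChain or LlamaIndex would wire up for you), and the "vector database" is just an in-memory list.

    # Minimal RAG flow: embed documents, retrieve the closest one, ask the LLM.
    from math import sqrt

    def embed(text: str) -> list[float]:
        # Toy "embedding": a letter-frequency vector. A real system would
        # call an embedding model here.
        vec = [0.0] * 26
        for ch in text.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def complete(prompt: str) -> str:
        # Hypothetical LLM call; a real system would hit a local model or an API.
        return f"[answer based on] {prompt}"

    # The "vector database": (embedding, document) pairs kept in memory.
    docs = [
        "Scalr is a remote state and operations backend for Terraform and OpenTofu.",
        "OpenTofu 1.9 added a for_each loop for provider configurations.",
    ]
    index = [(embed(d), d) for d in docs]

    def answer(question: str, top_k: int = 1) -> str:
        q = embed(question)
        ranked = sorted(index, key=lambda pair: -cosine(q, pair[0]))
        context = " ".join(doc for _, doc in ranked[:top_k])
        return complete(f"Context: {context}\nQuestion: {question}")

    print(answer("What did OpenTofu 1.9 add?"))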

Igor (42:39:43): So, from what I understood, MLOps or LLMOps is not very different from standard DevOps practices. You still need to deploy stuff. You still need to make sure it's secured. I have no idea how you make sure your bill won't run through the roof by running all these billions of tokens, etc. So, not very different. And you've probably noticed everyone talking about AI agents these days. I have some minimal understanding of how the models work, but can you tell me more about AI agents and what's required? How do you deploy AI agents, and can we reuse the same DevOps, MLOps, "AI agent ops" practices to deploy agents?

Dmytro (43:33:38): Yeah, agents are a really hot topic today. Let's see what infrastructure and resources we need to create and deploy an AI agent. An AI agent is an autonomous system which uses AI to execute different tasks, make decisions, and interact with different environments and other systems. Under the hood, we have several components. The first component, as you'd expect, is the AI itself: some LLM, which can be our own LLM deployed in our own infrastructure, or a public API like OpenAI, Anthropic, or others. The second component is the orchestrator, which orchestrates our workers; for this we can use different frameworks, and I think the community creates a new AI agent framework every day. And the third part is the workers. Some of these workers we build using those frameworks; some we build ourselves, for example to interact with different systems, so they are custom workers, like small scripts. So it's very similar to deploying an AI model, then deploying an orchestrator like a traditional application, and then deploying a lot of small applications, small scripts, and we need to manage all of this. We also need monitoring, we need tracing, and we need to provide and organize very good security, because, again, we don't want to leak private information to just anyone. So it's very similar to deploying and working with AI models, plus working with a lot of different small scripts and one application that manages its worker scripts. Of course, we additionally need some memory, some storage, and so on, but that depends on what the agent should do.
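
A toy version of the orchestrator-plus-workers shape Dmytro describes could look like this in Python. Here choose_worker() stands in for the LLM deciding which worker to call, and both workers are hypothetical stand-ins for real integrations.

    # Toy agent loop: a (stubbed) model picks a worker, the orchestrator runs it.
    from typing import Callable

    def check_disk(task: str) -> str:
        # Hypothetical worker: would query a monitoring system in real life.
        return "disk usage on host-1: 42%"

    def restart_service(task: str) -> str:
        # Hypothetical worker: would call an orchestration API in real life.
        return "service restarted"

    WORKERS: dict[str, Callable[[str], str]] = {
        "check_disk": check_disk,
        "restart_service": restart_service,
    }

    def choose_worker(task: str) -> str:
        # Stand-in for an LLM call that returns the name of a worker.
        return "check_disk" if "disk" in task.lower() else "restart_service"

    def run_agent(task: str) -> str:
        name = choose_worker(task)        # 1. the model decides
        result = WORKERS[name](task)      # 2. the orchestrator runs the worker
        # 3. trace every step, since monitoring and tracing are part of the job
        print(f"[trace] task={task!r} worker={name} result={result!r}")
        return result

    run_agent("Why is the disk filling up on host-1?")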

Igor (47:02:72): Fascinating. So, in a nutshell, what I understood is that an agent is just my bash cron job on the server. That's quite funny.

Dmytro: Yeah, it's absolutely that.

Igor (47:16): Fascinating. The more scripts you have, the smarter your agent is and the more things it can do. Yeah, that sounds awesome. I have a million questions, but I think we'll need to leave them for our second episode of this podcast.

Dmytro (47:29:39): Yeah, I think we can think about the future, make some predictions, and then meet again in, I don't know, a year and see what happens…

Igor: What happens, what changes.

Dmytro: Yeah, what happens, what changes.

Dmytro (47:51:68): So, speaking about the future: what do you think the near future will look like in the infrastructure area?

Igor (48:02:06): In the infrastructure area, I mean, there will be more different AI-oriented services. I know that cloud providers have already started to offer LLM services: you can spin up Llama and other LLMs on Google, on Amazon, on Azure. This will continue to happen. Then I believe there will be agents-as-a-service that use different models, where with some small configuration and inputs you can easily deploy those agents in the cloud. And then, recently, OpenAI released this OpenAI Operator, which I think is pretty cool. It can already do some simple tasks, and it has some trouble with more sophisticated tasks. But again, if somebody had told me, I don't know, five or ten years ago that this would be a reality, I would have said: hey, aren't you super optimistic? A lot of investment is going into the AI space right now; you probably heard recently about the plan to invest half a trillion bucks into the Stargate project. China is also trying to keep up, and it's a new competition: whoever becomes the leader in AI, I think that country will dominate the world.

Dmytro (49:32:08): Yes, this year will definitely be the year of AI agents; we will see a lot of interesting solutions based on them. I'm also sure we will see a lot of interesting hardware solutions specific to AI. Today, everyone mostly uses GPUs from NVIDIA, but if you look at the market, other companies like AMD are trying to catch up with NVIDIA, and a lot of startups are creating their own specialized hardware optimized to run and train AI, ML, and LLM models. So it will be interesting in this area as well. On the other side, I'm sure we will see a growing number of small language models, because to run and deploy these models we don't need such big infrastructure, which is very good for optimizing infrastructure cost. It's also very good for fine-tuning and improving a model for some specific task. And if we look at AI agents, we don't always need a large language model at the core of an agent; sometimes we need a model with specific knowledge, and that helps us optimize and speed up the performance of AI agents. It also opens the market for deploying agents onto specific, not very big hardware, where we may, for example, have power limits, because running big GPUs also requires a lot of power. So we'll see how it goes and what we will have at the end of this year.

Igor (52:20:47): And this is a good point. If I need to do basic math, then I don't need an LLM that has the entire Wikipedia in it.

Dmytro (52:29:05): Yeah. Good. Thank you for being with us today, and thank you for this interesting conversation.

Igor (52:43:36): Yeah, likewise, Dima. Thanks for inviting me. It was very interesting; I think we just barely scratched the surface of the topic. There will be more to come. Thank you.

Dmytro: Thank you.