EU AI Act Secrets Revealed

In this episode, hosts Bart Farrell and Sylvain Kalache discuss the new European Union legislation, the AI Act, with Joseba Laka, the digital director at TECNALIA Research & Innovation. While understanding the AI Act might be intimidating, Joseba unveils the key elements the tech industry must understand.

A key element of the regulation is its aim to regulate artificial intelligence through a risk-based approach with a classification framework. While the implications for businesses will be numerous, there will be a strong focus on the transparency of AI models. The EU AI Act will impact American companies and influence AI development and investment strategies, while positioning itself as a necessary framework to manage the ethical and safety risks of AI applications across different sectors.

Key takeaways:

Here are four actionable takeaways from the conversation:

1. Understand the AI Act: companies must understand the EU AI Act's risk-based regulatory framework, and more specifically, which risk category their product fits into (see the sketch after this list).

2. Invest in Trustworthy AI: As the demand for transparent and explainable AI grows – and will become a must – investing in non-black-box AI technologies becomes crucial. Companies should focus on developing and using AI systems that are not only effective but also align with future regulatory expectations for transparency and accountability.

3. Anticipate Changes in Investment Patterns: With the AI Act providing a clearer regulatory landscape, businesses might see shifts in investment trends. Investors will likely favor AI initiatives that are compliant and future-proof against evolving EU regulations.

4. Strategic Decision Making for AI Development: Decide strategically whether to build AI solutions in-house or buy from vendors. With the increasing importance of compliance and control over AI systems, building solutions might offer better long-term benefits by ensuring they are tailor-made to comply with stringent EU regulations.
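
To make the risk-category question in takeaway 1 concrete, here is a minimal Python sketch of the four-tier taxonomy discussed in the episode. The tier descriptions come from the conversation; the helper function and the example are purely illustrative, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, as described in the episode."""
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "strict controls on models, data, and conformity assessment"
    LIMITED = "transparency obligations (e.g. chatbots, deepfakes)"
    MINIMAL = "no specific obligations (e.g. spam filters, video games)"

# Illustrative only: under the act, classification depends on the use
# case, not the technology, and must follow the act's own annexes.
def obligations(tier: RiskTier) -> str:
    return f"{tier.name}: {tier.value}"

print(obligations(RiskTier.LIMITED))
```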

Read the transcript

Bart Farrell 0:00
On this episode of the Data Defenders Forum, I had a chance to speak to Joseba Laka, the digital director at TECNALIA Research & Innovation, a research centre in the north of Spain. We discussed the new European Union legislation called the AI Act, which aims to regulate the use of artificial intelligence in various domains. The Act takes a risk-based approach, categorizing AI applications based on risk and imposing regulations accordingly. We spoke about the challenges and opportunities the Act presents for different businesses in terms of the transparency and trustworthiness of AI models. We also covered its potential impact on American companies providing AI solutions. Let's take a look at the episode and see what we uncovered in the conversation. Welcome to the Data Defenders Forum. Today we are joined by someone who is quite near where I am, geographically speaking. His name is Joseba. Joseba, welcome.

Joseba Laka 0:50
Hi, how are you?

Bart Farrell 0:51
Great, thank you for joining us today. If you'd like to give a quick introduction about yourself, your background, and the things you're working on nowadays, that'd be great for our audience to get to know you a little better.

Joseba Laka 1:01
Of course. I'm a software engineer working at Tecnalia, an applied research institute in Spain, and we do many, many different things, from industrial R&D to health, mobility, pharmacy, all kinds of domains. One of these domains is obviously artificial intelligence. I'm the director of the digital unit at Tecnalia, and I'm very heavily involved in artificial-intelligence-related topics.

Bart Farrell 1:35
Fantastic. That being said, the topic of our discussion today is the new legislation coming out of the European Union, the AI Act. For people who might not know anything about this, can you give us an intro to what it is and what it's trying to tackle?

Joseba Laka 1:50
Yeah, I think it's fair to say that we live in the Wild West in terms of artificial intelligence. There are many standards for digital systems out there, and there are quite a few laws across the world about many different topics. In Europe especially, we have a few laws that have been created in the last few years about digital markets and privacy; GDPR is very well known, and so on. So before COVID-19, there was a discussion in Europe, among industries, politicians, and research actors like ourselves, that there was a need for something more than a regulation: a framework for artificial intelligence. The debate started long before the COVID-19 pandemic, and it happened in two stages. First, an initial document was created with the influence of many different stakeholders across Europe. Then, from 2021 to today, there has been a very long and difficult negotiation among the different actors, mainly the Parliament, the Council, the industry associations, and the R&D actors, about the law that is now almost final and is going to be signed very soon. So maybe by the time you see this podcast, it's already in place, hopefully. And actually, it is a law that does not regulate systems or technology as such; the whole approach is about risk. It's a risk-based approach, and the law is intended to be future-proof. Even as new artificial intelligence technologies become real, the idea is that since it regulates the use cases more than the technology itself, it will remain future-proof, and it will be the basis for the development of artificial intelligence in Europe.

Sylvain Kalache 4:02
Thank you. What's very interesting is that, as you call it, it's not really a regulation but more of a framework. Could you dive a little into this? What is this framework about? How is it built and how does it work?

Joseba Laka 4:19
Yeah, but actually, what is going to be signed now is a regulation: a law that will apply automatically all across Europe. It's not a directive, where each member state later has to create a specific national law to follow it. This is a regulation, which means it is directly applicable all across Europe. But it will be applied state by state, because in Europe the member states are the ones that actually implement the law.

And this is coming from previous work on creating a framework for what artificial intelligence is. The very first step was a definition, because it's very difficult to write regulation when you don't have a clear definition of what something is and what it is not. And then, secondly, and I think this was a very key agreement, the decision was not to regulate the technology but to regulate the applications of the technology: the contexts, the use cases where it is applied and where it shouldn't be applied. There are many people who really love the regulation and people who are very negative about it, but I think it's a regulation that was needed, and I think it is actually future-proof. What it defines is: contexts where you are not supposed to use artificial intelligence at all; contexts where artificial intelligence is considered high risk, which means there has to be a high level of control on the models, the data, everything; then another layer of limited risk, where, let's say, transparency will be required; and then a number of applications of artificial intelligence where nothing will be required except voluntary transparency, ethics, things like that. I think this sets a very clear scenario, specifically for private companies, so that they know that if they invest heavily in artificial intelligence in the upcoming years, those applications, those investments, will be future-proof.

Sylvain Kalache 6:42
So it's basically a four-level framework, right? Am I correct?

Joseba Laka 6:48
Exactly. We have unacceptable risk: these are applications that violate fundamental rights and values, and so on, and they are prohibited. Then we have high risk: applications that have an impact on health, on safety, or on fundamental rights in the European Union. There are a few application sectors where it was heavily discussed whether they were high risk or not; in the end, they will be high risk. Then we have the level where transparency is required, limited risk: cases where there is a way to manipulate people or create deceptions, for example chatbots and deepfakes. AI-generated content needs to be labeled as artificial intelligence, so that whoever is watching this podcast knows whether it is myself, a human, or AI-generated. And then there is a layer of minimal or no risk at all: for example, artificial intelligence in spam filters for email, or in video games and so on. Those won't have any specific regulation. What has also been a coincidence is that while this regulation was being drafted and negotiated among the different actors, generative artificial intelligence made this huge boom, with OpenAI and so on. So this was added, I would say, a bit artificially, but right in time, because it brought in another perspective: foundation models, such as the big LLMs and other large models we are using and developing right now, can produce what are called general-purpose artificial intelligence applications. And this general-purpose AI is not to be confused with artificial general intelligence; AGI is another topic. These general-purpose artificial intelligence applications or systems could end up being high risk, could end up unacceptable, or could simply have to comply with some transparency requirements. It will all be a matter of the application context, not the technology itself.
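
Since the limited-risk tier requires AI-generated content to be labeled, here is one possible shape for such a disclosure. This is a sketch only: the act does not prescribe a format, and the field names and model identifier are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """A generated artifact carrying the disclosure the limited-risk tier calls for."""
    body: str
    ai_generated: bool = True
    model_id: str = "example-llm-v1"  # hypothetical identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def with_disclosure(self) -> str:
        """Attach a human-readable label so viewers know the content is AI-generated."""
        if self.ai_generated:
            return f"{self.body}\n\n[AI-generated by {self.model_id} on {self.generated_at}]"
        return self.body

print(GeneratedContent(body="Draft product description...").with_disclosure())
```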

Sylvain Kalache 9:19
Yeah. The approach is very smart, because the technology is so complicated. As you said, with large language models and generative AI, we're just getting started in understanding the potential and the applications. By coming at it with this high-level, risk-centric framework, you ensure that it's pretty much future-proof, as you mentioned at the beginning of the conversation. Hopefully, at least; probably within our lifetime, a few generations. So, there is this law that should be signed soon, centred around this framework. What businesses will be impacted by it?

Joseba Laka 10:13
Well, mainly all kinds of businesses, and also public authorities, that are using artificial intelligence in one way or another, in their use cases, contexts, scenarios, wherever. In all member states in Europe the act will be mandatory. And this is very important, not only for the private companies that exist today, but also for the private companies that will exist in the future, the startups of today and the near future, and for public authorities and, let's say, all the IT behind them. And there is a big challenge, a huge challenge, there. A week ago, at an event I was taking part in, somebody said, well, you know, Europe is regulating, regulating, regulating, so Europe is going to lag behind in artificial intelligence, as in many other topics. And somebody very smart answered: actually, Europe's position in artificial intelligence is weak, and it is weak because we are in the Wild West, where there are no rules. The bigger you are, the bigger the chances you have to grow the business, and in Europe we have very few large digital companies. So, in fact, what this act is going to make possible in Europe is that investors will now know that if they invest in something AI-driven, it will be future-proof, because the law is there, and the law is going to be there for many years, hopefully. This law should be a booster for investment in artificial intelligence in Europe. Nowadays, some investors are a bit sceptical: I'm going to invest so much in artificial intelligence today, but what happens tomorrow if a law comes or something changes? In Europe, the law is there. So from now on, there will be certainty for investors, not the uncertainty that has been, let's say, the context we have been living in so far.

Bart Farrell 12:31
Okay. Now, as you mentioned, we even struggle sometimes with the definition of what is and what isn't AI. In which cases might it be tough to comply with the recommendations and guidelines in this act that's coming out, and why?

Joseba Laka 12:49
The definition of artificial intelligence in the act is coming from the OECD. It's worth reading; it's a bit long, but it is a good definition. Of course, it's a high-level definition, so there might be some grey areas: what is AI, what is not AI? But anyway, the challenge is going to be in the high-risk scenarios, the high-risk use cases. For example, at Tecnalia we work a lot on artificial intelligence in industrial machines, in vehicles, in health systems, in the energy sector: all the sectors where safety is a must. My background is in telecoms, and in telecoms, I remember, creating code or creating systems was difficult, but it was not as expensive as it is, for example, to put software in a plane, in a vehicle, in an industrial machine. Why? Because there are standards, standards that are really hard from a safety perspective. Now, what is going to happen is that if you are in one of these high-risk domains, the cost of all the evaluation and conformity assessment of artificial intelligence is going to be quite high. And of course, that's an issue. That's a hurdle, let's say a challenge, for the return on investment of artificial intelligence in some of these domains. But it's a must, because if you want artificial intelligence to really explode and be something really important in Industry 4.0, in the new mobility, in the smart grid, in many other sectors that are really demanding artificial intelligence, you can't work with consumer-level artificial intelligence, so to say. You need artificial-intelligence-based models that are safety-proof, and we all know that when something has to be safety-proof, the number of mechanisms you have to implement makes things much more expensive. And if later there's an accident, for example, you need an AI that is not a black box: AIs that are explainable, trustworthy, and so on. So the real challenge is going to be there. The unacceptable artificial intelligence, of course, is simply outside the law: you won't be able to do things that are quite common, for example, in China or in other geographies. And for the applications that only require transparency, it shouldn't really be a big deal, because transparency is something that we all demand from artificial intelligence in general. So it's going to be all about high risk, in these safety-relevant domains where safety is number one. Nobody will be working on an AI-driven machine-tool system, for example, if that artificial intelligence has not passed all the conformity assessments required for it to actually be used in the machine-tool system.

Sylvain Kalache 16:08
So for all the actors in AI, the entrepreneurs, the software engineers: can you tell us a little more about how they can comply? Or maybe we can flip the equation and speak a little bit about how regulators, and the regulator here is the country, will enforce the law. How will they assess the conformity of products?

Joseba Laka 16:37
That's a very good question, because there are big question marks about that. There's going to be an AI office at the European level, of course, and the law has very strong sanctions for whoever doesn't comply with it. The law is meant to be applied state by state. I'm talking from Spain: right now, the Spanish government is setting up what is called the AI supervision agency, AESIA. What we all expect is that AESIA will soon be dealing with the companies that do conformity assessment of many other technologies, because there are tonnes of standards for conformity assessment, and they will start creating the protocols, the test beds, the sandboxes required to go through these conformity assessment processes and get the seal saying: okay, your model can be applied in a high-risk scenario. I'm not sure how it's going in other countries; I'm getting information from colleagues in Germany, France, Italy, and so on. But as you know, once the law is signed, I think it's six months until it applies to everything that is unacceptable, then 12 months for high risk, and I think 24 months for limited risk. So it's going to go step by step. And during these months, what we expect is a rush to create all these probably very vertical sandboxes for specific domains, so that you will be able to take your model there, even take your data, everything that surrounds or makes your artificial intelligence operative, demonstrate that it complies with all the requirements for high risk or limited risk, and then go to business.
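
To see how that staged rollout plays out in practice, here is a small sketch that projects the milestones Joseba recalls (six, 12, and 24 months) from a hypothetical entry-into-force date. The final act's exact schedule may differ from these figures.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day 1, so no clamping needed)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Milestones as Joseba recalls them in the episode; treat as illustrative.
MILESTONES = {"unacceptable": 6, "high": 12, "limited": 24}

entry_into_force = date(2024, 8, 1)  # hypothetical entry-into-force date
for tier, months in MILESTONES.items():
    print(f"{tier:>12} risk rules apply from {add_months(entry_into_force, months)}")
```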

Sylvain Kalache 18:35
So countries need to be prepared to do this at a national level. How convinced are you that they will be able to implement this quickly? Obviously, artificial intelligence, especially with the progress we've made in the last year or so, has been so impressive. How can you expect countries, at a national level, to develop these services, the gates, the sandboxes? Do you think it's going to be easy, or do you think it's going to be challenging?

Joseba Laka 19:11
Well, first of all, there's going to be business here. There are many companies in Europe and worldwide that are doing evaluation, assessment, and testing of many things, and getting a certain label that gives you value in the market is also a business in itself. This is my opinion: I think this is going to be a big business for these kinds of companies. And from many comments I'm hearing from my colleagues working in the standardisation committees, there is a bit of scepticism about the milestones, because some of my colleagues believe it will be very difficult to change certain safety-related standards by the time the AI Act should really go live. It's quite easy to say: okay, this is banned, you cannot do massive biometric surveillance or social scoring and these kinds of things. But to go through a process to demonstrate that your model, and the operative artificial intelligence around it, can be embedded in an industrial domain, a mobility domain, a health domain: that's difficult. So what some people think is that there might be a delay in the application of those 12 months for high risk. Having said this, the milestones are still there. So it might very well happen that in two or three years, we are still in the middle of that process.

Sylvain Kalache 20:55
Okay. So that's kind of the secret: countries may not be ready. Like, we might not be there. The laws, the regulations, the framework are here, but the technical implementation is yet to be seen.

Joseba Laka 21:08
I mean, we know very well how difficult it is to get some of these certifications. It's quite difficult sometimes to demonstrate that your digital product really is safety-aware and so on, all these kinds of things. So we believe the biggest challenge now is going to be having everything set and ready, so that the day the agency says, okay, now you have to go through this sandbox to get your certification, everything will actually be ready. Let's see.

Bart Farrell 21:40
You know, with every challenge there are also opportunities. Given the situation we have right now, where AI is pretty much a black box, do you think there will be opportunities for businesses that can help with the transparency of AI models? And are you aware of any companies right now that you might be able to mention that are doing that?

Joseba Laka 22:03
What I'm able to say is that trustworthiness is something that, a few months ago, was seen as having very limited market value. You know: I have a black box, it works very well, fantastic, I can trust the black box. But now the white-box concept and real trustworthiness are becoming market assets, something companies are starting to think about. It's not enough to have this black-box AI, whatever the architecture is, I don't care, that actually works fine and makes these predictions or prescriptions or generates whatever. Now I need to audit it, I need full traceability. If something goes wrong, I need to be able to demonstrate what happened, and if I can't, then I won't get the label and I won't be able to go to market. So suddenly trustworthiness, a topic that so far has been quite academic, a nice-to-have, is going to be one of the most market-relevant issues. And there are others that so far have not been so relevant, like energy consumption, because energy consumption is also there in the law. We are all talking about billions and billions of parameters, gigabytes of this and that, and at some point the answer is: no, stop, models will have to be efficient, otherwise you might have problems. So there are many topics that so far have been nice-to-haves, and now they are going to be must-haves. And I can tell you that this is now at the top of the minds of many CEOs and CTOs in many large and medium-sized private companies.
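
As one possible reading of the traceability Joseba describes, here is a minimal sketch of a per-prediction audit record: which model ran, a hash of what went in, what came out, and when. The field names and the example model are hypothetical; the actual requirements will come from the conformity assessment standards.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, features: dict, prediction: str) -> dict:
    """Build one traceable record: inputs (hashed), output, model, timestamp."""
    return {
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# In production this would be appended to tamper-evident storage;
# here we just print one record.
print(json.dumps(audit_record("credit-model-v3", {"income": 52000}, "approve"), indent=2))
```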

Bart Farrell 23:51
We've been talking a fair amount about companies in the context of Europe, obviously, this being a framework and legislation coming out of the EU. But a lot of the big players in the AI ecosystem are very much based in the US. How likely is it that they will comply? If we look at what we mentioned previously about GDPR, we've seen other things around data protection in the United States. But when it comes to AI, what do you expect the reaction from American companies to be?

Joseba Laka 24:24
How to say it? I wish I knew. But see what happened with GDPR: it is now the worldwide standard for data privacy. I mean, you're not going to build something where, okay, this is the European flavour, this is the American flavour, this is the Chinese flavour; you have one flavour. And I think, and I hope, that what I was mentioning before about trustworthiness, explainability, traceability, auditability, all these kinds of things, are going to be the new value propositions of many artificial-intelligence-based systems. Because if you are not able to demonstrate them, what is going to be your value proposition in domains like health, where a decision is so critical? So to say: this is cancer, this is not cancer. All these applications that seem to be like magic, where a black box decides. No, sorry, you will have to actually demonstrate it. So I hope that many companies will see that there is a huge return on investment in going this way. It's the hard way: if you go the hard way, you have to develop more, but you have all the markets. If you don't go the hard way, you are out of the market in Europe, and there will be many companies building in the European style, let's say, and those will have an advantage in the market. So I think the smart move will be to say: okay, like GDPR, this is it, and we go for these requirements, no matter if I'm selling in the US or I'm selling in France.

Bart Farrell 25:54
Who knows? If anybody likes to do things differently, it's definitely the United States when it comes to legislation coming from other countries.

Joseba Laka 26:02
By the way, the US is huge on defence, and I have to say this because some people really get crazy about it: the AI Act doesn't apply to any defence or security AI applications. I mean, there are certain topics where judges will have to authorise the police, for example, to use remote biometric surveillance systems in real time; that will exist. But defence, another very critical aspect where the US is in the lead, is out of the scope. So on that, the European flavour will be the same one as in the US.

Bart Farrell 26:41
And in terms of practical steps that companies can take today, in order to be better prepared: if you had to recommend a few concrete things, what would you say? If a company works with AI, or creates technologies based on AI, what are the core things they simply can't miss?

Joseba Laka 27:01
I think it is to invest in the technologies and the choices that provide some guarantees, I mean, warranties that you are not following a path that is outside your control. There is always this question: buy or build. Many times you buy because it's cheaper, as a service, whatever. I think there's going to be a big shift in the coming years towards more build. Companies, and we're feeling this already, sense that if they go the as-a-service route, the subscription model and so on, they are going to lose the core value of the artificial intelligence solution. And not only the core value; they are going to lose control, the certainty that they will be valid once the law is in place. There are so many questions right now. We don't have the final text yet, by the way, but we know 99% of it. From the perspective of the private companies that are demanding artificial-intelligence-based solutions, there is a shift towards building most of it. And for the ones that are selling artificial-intelligence-based value propositions, my suggestion would be: try to build something that is going to be future-proof for five, six, seven years. You have to think very carefully about your market domain. If you are going to be in a high-risk market context, or if you might be there, you have to go for it. A good parallel is cybersecurity. Many years ago, cybersecurity was the last thing: okay, you build the system, and now add firewalls and so on. No: security by design. Now everybody works security by design; you don't design and then add security. Here it is meant to be the same: you have to comply by design. It's not just a matter of taking whatever is out there, plugging it in, and fantastic. Sorry, that is not going to comply; you may have to rebuild your complete technical stack. So choose what you add to the stack, try to build as much as you can, and try to control as much as you can, at least for the coming years. Now, if you are not going to be in high risk, then there are many as-a-service value propositions you can work with, because for all these big players selling these platforms, and I'm not going to say which ones, transparency is something that will be applicable, and it shouldn't be very difficult for these as-a-service providers to give you this transparency as a service.
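
Joseba's comply-by-design parallel to security by design suggests wiring compliance checks into the release pipeline itself, the way CI gates block an insecure build. A minimal sketch, with an entirely hypothetical checklist; the real required artifacts will come from the act's implementing standards.

```python
REQUIRED_ARTIFACTS = {
    # Hypothetical checklist items, not the act's actual requirements.
    "risk_tier_assessed",
    "training_data_documented",
    "transparency_notice_ready",
    "conformity_assessment_passed",
}

def release_gate(artifacts: set[str]) -> bool:
    """Block deployment unless every compliance artifact is present: comply by design."""
    missing = REQUIRED_ARTIFACTS - artifacts
    if missing:
        print(f"Release blocked; missing: {sorted(missing)}")
        return False
    print("Release approved.")
    return True

release_gate({"risk_tier_assessed", "training_data_documented"})
```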

Sylvain Kalache 30:09
Yeah, and I think compliance by design, and the fact that the regulator pushes vendors and companies consuming AI to remove this black-box aspect, is going to help the industry grow and define standards. It's going to be beneficial. The fact that today generative AI and the models we are building are black boxes is definitely an issue, because we don't really understand them, and as you mentioned, in critical or life-threatening situations that is absolutely unacceptable. So, before we wrap this episode: we are just at the beginning of this, and as you say, it hasn't been signed yet, but based on your experience in the field, what do you believe we can expect from this European AI Act in the future? How do you believe it may evolve?

Joseba Laka 31:18
It's a very good question, because it has taken several years to get this law written and agreed, and we all hope that it will last many, many years. But actually, there are already some issues on the table with the regulation. For example, in the health domain we have the Medical Device Regulation, the MDR. The MDR states that software is also a medical device, so now artificial intelligence is going to be a medical device. And there are aspects of the MDR that, let's say, limit a lot the kind of artificial intelligence you can use, because the artificial intelligence used has to be absolutely certain; trust is not the issue, certainty is. You cannot apply certain techniques that are out there in the market, like continuous learning on artificial intelligence models. There are certain techniques that are simply not allowed under the MDR. So now we have the AI Act coexisting with other regulations that were already in place, and they might need to evolve so that everything fits together like a puzzle. From the regulatory perspective, there are going to be side effects after the regulation is in place, and we are sometimes pessimistic, sometimes optimistic, about how these side effects will really work out and how the puzzle will fit together. But what we really expect is that the regulation will be in place and will be stable for many, many years to come, because we believe it has been built to be really future-proof. Of course, AGI could arise, artificial general intelligence, and we might have a massive new revolution with that. If you ask people when AGI will come, some say tomorrow, some say never, so who knows. But what we really expect is that now, for investment in artificial intelligence in Europe, we have a friendly law: challenging, yes, but not unfriendly, because the law is going to set up the business scenario for many years. Anyone making an investment will have the assurance that the law is going to protect whoever really complies with it, and is not going to permit anybody who is not compliant to operate in Europe. The next years are going to be a lot about issues, challenges, problems, ambiguities: one member state says one thing, another has a different interpretation, and you need to harmonise everything. So it's going to be tough; the next years will be tough. And for everybody like us working on R&D in artificial intelligence, on medium- and long-term technologies and projects and very exploratory themes, we are already suffering some of these issues. For example, we have many projects that mix biometrics and artificial intelligence so that we can predict how somebody is driving a car, or working with a machine, or whatever, and that is recognising emotions, which is forbidden by the law unless, unless, unless. So you have this act, you have others. It's going to be really, really fun.

Bart Farrell 35:17
It's certainly not going to be boring, that's for sure. No, definitely not. Well, Joseba, thank you very much for joining us today. There's so much we're taking from this conversation: how we got to this point, the approach being taken, the very definitions around these things, what is AI and what isn't, and the state-by-state approach of national regulations that may be put in place to help organisations on the ground. There are practical things that need to be kept in mind, and once again, this is a challenge, but there are also a lot of opportunities to be explored here. So thank you so much for your time today, and we look forward to collaborating with you in the future.

Joseba Laka 35:52
My pleasure, thank you very much. And for anybody listening to the podcast: stay tuned, because things are moving very quickly, and I'm sure there will be news about the application of the act in the coming years.

Bart Farrell 36:10
Yeah, we'll have plenty of reasons to have you back for another podcast. If people want to get in touch with you, what's the best way to do so?

Joseba Laka 36:16
Twitter, LinkedIn, email. I mean, it's quite easy to find me if you look on social networks.

Bart Farrell 36:24
Perfect. Great. Well, thank you very much. All right, take care. Cheers. Bye.