Transformation in Trials

Revolutionizing Neurological Care: Andreas Forsland and Gregg Johns on Assistive Technology and Brain-Computer Interfaces

Sam Parnell & Ivanna Rosendal Season 6 Episode 2


In this episode we dive into Brain-Computer Interface (BCI) technology, where the brain's faint signals are harnessed to create life-changing applications. Andreas Forsland and Gregg Johns emphasize the importance of community-driven innovation, guided by the Brainiac Council—a unique assembly of patients, caregivers, and experts. This collaborative approach ensures that the solutions developed address real-world challenges, improving communication and independence for those with severe impairments. The discussion also highlights a groundbreaking clinical trial with the University of Waterloo, showcasing how AI-enhanced devices can reduce social isolation and enhance quality of life.

We also cover:

• Importance of user feedback in product design 
• Communication technology for ALS, Parkinson's, and more 
• The regulatory challenges of medical device development 
• Integrating AI to improve user experience and interaction 
• Future vision for accessible health technology 
• Ethical considerations surrounding brain-computer interfaces

Guests:
Andreas Forsland: https://www.linkedin.com/in/andreasforsland/
Gregg Johns: https://www.linkedin.com/in/gregg-johns-product-leader/ 


________
Reach out to Ivanna Rosendal

Join the conversation on our LinkedIn page

Speaker 1:

Welcome to another episode of Transformation in Trials. I'm your host, Ivanna Rosendal. In this podcast, we explore how clinical trials are currently transforming so we can identify trends that can be further accelerated. We want to ensure that no patient has to wait for treatment and that we get drugs to them as quickly as possible. Welcome to another episode of Transformation in Trials. Today, I have two guests in the studio with me, and we're going to be talking about designing wearables with the end user in mind. Andreas, would you introduce yourself for us first?

Speaker 2:

Absolutely, Ivanna. My name is Andreas Forsland. I'm the founder and CEO of Cognixion. Cognixion is a wearable brain-computer interface company. We use EEG technology integrated with our own proprietary augmented reality headset, and we build applications on the system for various neurological conditions.

Speaker 2:

A little bit about how I got into this world. Prior to starting Cognixion, I was in the field of user experience design, customer experience design, branding. I used to work at Philips Electronics across consumer electronics, healthcare, lighting, and semiconductors, and I've had a long history in UX design through multiple evolutions of the internet, from the first generation to the second and third. Now we're developing the next-gen version of a sort of brain-body web symbiosis, if you will. So I'm very excited about this. But a personal situation got me into the field of neurology and assistive tech. When my mom was sick with pneumonia, she was intubated and unable to speak, and that was my first time realizing how precious communication, human independence, and agency are, and how quickly they can be stripped away by something seemingly as common as pneumonia. Then I learned a lot about the many other chronic conditions that millions of people are living with, and that was the real impetus for creating the company.

Speaker 1:

Wow, that's a very strong personal reason to get started.

Speaker 2:

Yeah.

Speaker 1:

Greg, would you mind introducing yourself too?

Speaker 3:

Oh sure, yeah, not at all a tough act to follow, no worries. Gregg Johns, I'm the VP of Engineering for Cognixion. I've spent most of my career in medical devices, in particular spine surgery. Similar to Andreas, I also got into medical devices for personal reasons, when I was an undergraduate student.

Speaker 3:

I had a close family member get quite sick, and I grew up in a remote community where the quality of healthcare is, let's say, not where it should be. For various reasons, I realized there's this huge gap in providing the same quality of healthcare for individuals that live in communities like that one. So for my capstone project, we created a wireless patient monitoring system with the intention of delivering greater patient care in those communities. That's actually how I landed my first job in medical devices, which brought me to spine surgery and eventually connected me with Cognixion some years ago, and that's how I ended up here. It's interesting how that works out.

Speaker 1:

Yeah, there's always some interesting journey that leads to why people do what they currently do, especially in the broader health industry. But, Andreas, you mentioned that the ability to communicate was one of the reasons why you found there was a problem that needed addressing. I know that your product actually addresses all sorts of different problems. Can you tell us more about what you set out to improve in the first place?

Speaker 2:

Well, yeah. So if you go back to my mom's story, she was intubated and restrained in a bed in an intensive care unit with life support on, and she was intermittently sedated and alert and conscious, and she had an intention, right? She wanted to say something, or she wanted to request that the lights be turned down, or that she be rolled over because she's in an uncomfortable position, or her nose might itch, these kinds of things. So there's the need to be able to communicate, the need to be able to control your environment, the need to be able to adjust your positioning or even move around and change your perspective and say, I don't want to look at that wall anymore, I want to look at the window and see the birds outside. I understood deeply how badly she wanted to do those things and how unbelievably impossible it was for her to do them herself. And then you realize how many people in the world have had a stroke, or have ALS or Parkinson's or traumatic brain injuries or spinal cord injuries, and they're living in those situations all day, every day. So that was the inspiration. But thinking about the most primary, most primal thing you could do to really have independence, we realized that communication access was the most basic thing that could allow someone to sort of climb out of their body or climb out of a depression, because they could then be heard. They've at least expressed themselves, they've at least been understood, and hopefully what they communicated could affect something in the real world, whether that's the type of care they're getting from a human being, or an IoT smart sensor in the room that does a thing and responds to them.

Speaker 2:

So, early on, it was: what would be the best way to solve that, right? Obviously everyone has a brain, but not everybody can use their hands. So through deduction you ask, what are the right kinds of sensors, and what are the right kinds of interfaces, from a design perspective, that someone would need to be able to access easily? And then, what's interesting is, you think about how quickly or how difficult it is to learn to use new software. From a user experience perspective, how do you make it feel so intuitive that the first time you use the software it feels natural, like it was made for you, personal? And then, as you use it, the technology almost feels like it disappears and you just do what you're doing, whether that's communicating or controlling the room, these kinds of things.

Speaker 2:

So the vision was really to give someone a very hyper-natural, adaptive interface, and the first application to start with was communication. What we endeavored to do was create a wearable headset that would give someone a holographic display on a clear lens, with a keyboard or words or phrases they could pick from, and they could do it either with eye tracking or with their mind, by mentally paying attention to things and composing thoughts. And the headset had to have speakers on it. So we worked out the user requirements: if someone had a wearable speech-generating device that really felt like a replacement for their mouth, what would the technical requirements have to be to satisfy and trick the user's mind into believing it's an extension of their body, like a neurobionic?

Speaker 2:

And in order to do that, we realized early that it had to be a closed-loop system; it had to operate very accurately, it had to be very fast, it had to be very easy to use. So it was a very difficult, tall list of user requirements to overcome to create something like that as a communication solution. But to your point, now that we've created it, if you strip away the use case of communication, you're left with the core system. You say, wow, you've created this wearable immersive AR headset with sensors that track eye and head motion and brain activity, it processes data very fast, and you have wireless tools for accessing that data. What else could you do with it? And that's when we realized you could generalize that kind of architecture, that kind of adaptable wearable platform, to unlock all kinds of applications. Thinking about measurement of brain function, measurement of brain dysfunction, mental focus, alertness, there are so many things we could start to measure with a system like that.
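To make that generalization concrete, here is a minimal Python sketch of how a wearable platform like the one Andreas describes might expose one fused sensor stream to many applications. All names and fields are hypothetical illustrations, not Cognixion's actual data model:

```python
from dataclasses import dataclass

# Hypothetical sketch: every application (communication, brain-function
# measurement, alertness monitoring) consumes the same time-stamped,
# fused sensor stream. Field names are illustrative assumptions.

@dataclass
class SensorSample:
    timestamp_us: int                              # microsecond clock, for closed-loop speed
    eeg_uv: tuple[float, ...]                      # per-channel EEG, in microvolts
    gaze_xy: tuple[float, float]                   # eye-tracking point on the display
    head_quat: tuple[float, float, float, float]   # head orientation quaternion

def on_sample(sample: SensorSample) -> None:
    """Route one sample to whichever application is running; swapping the
    application, not the hardware, is what makes the platform general."""
    ...
```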

Speaker 1:

So you started with speech as the focus, and then you expanded, based on what you had created, into other health applications beyond just being able to communicate.

Speaker 3:

So what we realized in the art of making this speech-generating device was that there's actually this huge market gap, right? As you're probably painfully aware, it's extremely expensive and time-consuming to bring devices, drugs, really anything medical, to market. So when we started to do the speech generation, we really looked at, well, what's the cost of doing this, what is it going to take, how do we get it into people's hands? And we realized that a very, very small percentage of medical devices that are novel actually go from non-clinical research to market. In fact, if you look at the published literature, it's something like 14% for clinical devices.

Speaker 3:

So very, very small.

Speaker 3:

So if you start to think about all the different applications Andreas is talking about, you start to think, well, how many of those are actually going to make it to the end, right? And not all of them fail because they lack clinical significance or clinical utility. Sometimes it's because companies can't navigate the regulatory domain, or they don't have the technical skills, or the technology doesn't exist. So, as Andreas pointed out, we have this technology that we built for a very specific use case, but it can be generalized to this emerging pattern that you see specifically in neurology in the past five years, where you're seeing really specific therapeutics and diagnostics treating things like ADHD, Parkinson's, traumatic brain injury, and dementia. But all of them are very centered on this sort of BCI hardware, and most of the clearances you see are software clearances. So it demands this sort of general-purpose platform that's highly focused on neurology and brain health, a closed feedback system with the human in the loop.

Speaker 1:

Could you tell us more about BCI? What does that stand for, but also what does it mean?

Speaker 2:

You want to jump into that, Greg?

Speaker 3:

Yeah, I can do that one. So BCI is brain-computer interface, and it's a fun question actually, because there's not really a universal definition for BCI. In fact, many technologies that are already on the market could probably be considered BCI if you really think about it. It is some type of bi-directional interaction with the brain, right? Whatever that might be: trying to drive an actuator with your brain, getting feedback about your brain, providing diagnostics about the brain. That's sort of how I imagine it. I don't know, Andreas, if you have anything you want to add to that specifically.

Speaker 2:

Yeah, I mean, there's a lot of work being done in various pockets and communities trying to deal with the nomenclature and the definitions. But I think the current state is that it's some kind of sensor that sits on the surface of the brain, or non-invasively near the brain, or on the surface of the skin. Typically a brain-computer interface is focused on the central nervous system, the brain itself, as opposed to the peripheral nervous system. So watches with sensors are not BCIs.

Speaker 2:

These are sensors that are on the head or in the head, picking up really faint but very fast-moving electrical, blood flow, or magnetic frequency changes on the surface of the brain.

Speaker 2:

And so imagine you have a sensor that's just picking up an on-off state, like a light bulb going on and off: you see electrical current activity, then you see no activity, and that's like a switch. It turns something on and off.

Speaker 2:

Or you might have an array of a thousand tiny surface electrodes on the surface of the brain, picking up a whole mosaic of brain activity across many neurons. Those are sensors, but the sensors are sending the data, wirelessly or sometimes wired, out of the brain, out of the head, and out of the body to a computer. So that's the brain-computer interface: you're sending data out of the brain to a computer. And sometimes, depending on the nature of the sensor and the software, you're not just sending data off the brain to a computer for an application to run; you might also be running an application that sends data back to the brain, to send stimulation to the brain, et cetera. So when Greg says it's bi-directional, it's like you're reading and writing to a biological hard drive, right?
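As a rough illustration of that read-decode-act loop, here is a minimal Python sketch. The sensor driver and the decoder are stand-ins (random data and a toy power measure), assumptions for illustration rather than Cognixion's pipeline or any real EEG API:

```python
import numpy as np

# Hypothetical closed-loop BCI sketch: read a window of EEG samples,
# decode an intent, and act on it, which in turn changes what the user
# sees. Sensor access is stubbed out with random data.

SAMPLE_RATE_HZ = 256        # a typical non-invasive EEG sampling rate
WINDOW_SECONDS = 1.0
N_CHANNELS = 8              # e.g., electrodes over the visual cortex

def read_eeg_window() -> np.ndarray:
    """Stub for a real driver call; returns (channels, samples)."""
    n = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)
    return np.random.randn(N_CHANNELS, n)

def decode_intent(window: np.ndarray) -> str:
    """Toy decoder: picks the channel with the most signal power.
    A real system would use trained ML/statistical models instead."""
    power = (window ** 2).mean(axis=1)
    return f"select_button_{int(np.argmax(power))}"

def act(intent: str) -> None:
    """Close the loop: drive the application, then render feedback
    (highlight a button, speak a phrase) back to the user."""
    print(f"application received: {intent}")

if __name__ == "__main__":
    for _ in range(3):      # a few iterations of the loop
        act(decode_intent(read_eeg_window()))
```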

Speaker 1:

Oh, that's fascinating. We've talked a little bit about some of the applications and aspirations for your device, but one of the things I found interesting is that you actually develop it in very close collaboration with your users and communities. Can you tell us more about which communities you interact with and how they contribute to your products?

Speaker 2:

Greg can talk more specifically about some of the actual studies going on right now, but when we started the company almost 10 years ago, we focused on the problem and on community activation first, before we got into the science and the technology. So we're very different from any other BCI company. All the other BCI companies started with the brain and the sensor and the aspiration to create a very high-resolution, high-speed sensor. Our focus was: what problems are people dealing with, what are the largest populations dealing with common problems like speech or mobility, and how do we work backwards? So early on we developed what we call the Brainiac Council, which is, in other words, a user advisory council. We now have over 200 people in this Brainiac Council, representatives from all over the world, though heavily dominated by North America. These individuals include patients, their caregivers, and professionals. So we have neurologists, neurosurgeons, ophthalmologists, SLPs, which are speech-language pathologists, OTs, PTs, ABAs, which are behavioral therapists, and psychiatrists on the professional side. And then we have specific conditions represented in our community: traumatic brain injuries, strokes, overdose, ALS, Parkinson's, dementia, MS, cerebral palsy. So we have a lot of different conditions represented.

Speaker 2:

Because when we started getting into the solution finding, we said, well, there must be solutions already out there to solve some of these communication or mobility problems. But what we found is that there were a lot of well-meaning, small, mom-and-pop kinds of technologies, and then really big companies like Medtronic and Stryker. Most of the big companies were focused on really big billion-dollar markets, and most of the small companies were focused on one specific condition. They were experts in cerebral palsy, or experts in Rett syndrome, or experts in autism. What we realized was that there was no platform that could satisfy the needs of multiple conditions and look at the common denominators of shared user requirements.

Speaker 2:

So that's really where we started. We said, how do we build a big database internally, as a knowledge base, around hundreds of clinical conditions: whether a BCI is appropriate or not, whether people would be comfortable wearing it, and so on. We've created this internal knowledge base that is very deep and broad, based on this user advisory council work and on activating multiple feasibility tests, focus groups, and surveys. We do periodic Zoom calls with an international cohort of these folks and get feedback on videos and renderings of systems. We have people in the community who try on our technology, and we go into the field, into their homes, to talk to them, interview them, have them try our technology, and do human factors studies. And now we're entering the period where we're actually running our own clinical studies. We actually have a clinical study that was just published on ClinicalTrials.gov.

Speaker 1:

Ooh, congratulations.

Speaker 2:

Thank you. Yeah, maybe Greg can talk about that. We're doing one now, and then we have a number of our healthcare system customers teeing up their own IRB studies for multiple SaMD applications. SaMD is software as a medical device.

Speaker 1:

So I do want to learn more about this study. But before we dive into that, I'd be curious: based on this big body of knowledge you've gathered from different user types, what have you found about similarities versus differences in their needs, and how do they relate to the mission you set out to solve?

Speaker 2:

Well, the interesting thing is there are a handful of things that generalize across these populations, and the good news is most of that's in hardware. The complexity, the complications, are actually in the desire, the dream, to personalize the software. How do you adapt to the individual? A very specific example: take someone who has late-stage ALS and can't even move their eyes. You have a display in front of them with graphics, and you have to arrange the graphics in the display in such a way that they can read the text labels on each of the buttons in front of them. So you have certain constraints around laying out an interface for someone with ALS so that they can see everything, and if they can see it, then they can interact with it.

Speaker 2:

Well, now imagine the exact same hardware being worn by someone with cerebral palsy who has very spastic muscular tone and movement, and they're in a wheelchair moving all around uncontrollably. What are you going to do with that interface? Are you going to lock the interface to their body, or are you going to let it float in virtual space? Does the interface follow them around as they move through space? You think about all of these real-world conditions: does someone get motion sickness? Can they see everything within their field of view? Do they have some cortical visual impairment, CVI, that could impair their ability to see half the screen, or the whole screen, or the top or the bottom?

Speaker 2:

So there's a tremendous amount of personalization required for each and every single user, because you have many comorbidities in a lot of these severe cases. And that's the beauty: if you understand all of that, thinking about your eyes, your ears, your proprioception, temperature, weight, positioning, cognitive abilities, hearing, all your senses, can you have settings, vectors or variables in the user interface, that can be adjusted very quickly? The problem is not just having a bunch of settings to personalize, like a long menu, but how you can start to use AI or other software algorithms to allow someone non-technical to personalize an experience for a user. And that's where there's a really beautiful opportunity for this convergence of very thoughtfully designed hardware that can be worn by anyone, with software that can be deeply personalized to the individual, both in the user experience and in the functionality, using statistical models, traditional software development techniques, user experience work, plus machine learning and AI.
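One way to picture those "vectors or variables" is as a per-user profile that software, or an AI layer on top of it, can tune automatically. A minimal sketch, with hypothetical field names and an invented adaptation rule, not Cognixion's actual schema:

```python
from dataclasses import dataclass

# Hypothetical per-user interface profile: the settings a caregiver or an
# AI layer could adjust without walking a long menu. All fields and the
# threshold below are illustrative assumptions.

@dataclass
class InterfaceProfile:
    text_scale: float = 1.0              # larger for low-vision users
    lock_ui_to_head: bool = True         # False lets the UI float in world space
    usable_fov_degrees: float = 40.0     # shrink if CVI hides screen regions
    dwell_time_s: float = 1.5            # selection time; longer for spasticity

def adapt_for_motion(profile: InterfaceProfile,
                     head_motion_rms: float) -> InterfaceProfile:
    """Toy adaptation rule: if the user moves a lot (e.g., spastic tone),
    lock the UI to the head and allow more selection time."""
    if head_motion_rms > 0.5:
        profile.lock_ui_to_head = True
        profile.dwell_time_s = max(profile.dwell_time_s, 2.5)
    return profile
```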

Speaker 1:

That is very interesting. Greg, maybe you could tell us more about this clinical trial.

Speaker 3:

Sure, yeah, I'd love to. So the clinical trial is focused on our original mission, the speech-generating device, but I would say the device we're evaluating is head and shoulders above what's been on the market to date, by quite a large margin, in my opinion. It's run in conjunction with the University of Waterloo here in Canada, and we're doing a rolling enrollment, so we're adding new people as we expand, which is exciting. We think it's going to be quite large. The idea is to evaluate a speech-generating device based on BCI. We use a type of visual evoked potential, a visual stimulus that you fix your gaze on through concentration or focus rather than eye pointing, and that allows you to compose what we call an utterance. The natural thinking would be that you'd spell out a word, like on a keyboard, and that's the natural progression, how you might imagine a keyboard translating to BCI.

Speaker 3:

Well, for the individuals we're building this technology for, which is, in many ways, all of us, that can be quite taxing, particularly when you've got severe motor impairments or cognitive issues. Spelling out words can be time-consuming, and the repercussion is a high degree of social isolation, because the dynamics of live conversation don't really stop while someone is spelling out a word. It's not a natural way of communicating, so the folks that have these devices feel quite isolated. What we've done is use generative AI, particularly large language models, in conjunction with personalizing the vocabulary, the body of knowledge that the LLM has, in combination with traditional speech-generating technologies and BCI, to essentially minimize the effort to convey a sensical intention or utterance.
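In other words, the language model moves the user from many letter selections to one phrase selection. Here is a toy sketch of that idea, where llm_suggest is a stand-in for a real, personalized LLM call rather than the trial's actual software:

```python
# Hypothetical sketch: instead of spelling letter by letter, a language
# model proposes whole candidate utterances from conversational context,
# and the user makes a single BCI selection. llm_suggest() here is a toy
# word-overlap ranker standing in for any real LLM call.

def llm_suggest(context: str, personal_phrases: list[str], k: int = 3) -> list[str]:
    """Rank the user's own phrases by naive word overlap with the context.
    A real system would query a personalized LLM here."""
    ctx = set(context.lower().split())
    scored = sorted(personal_phrases,
                    key=lambda p: -len(ctx & set(p.lower().split())))
    return scored[:k]

def compose_utterance(context: str, personal_phrases: list[str],
                      bci_choice: int) -> str:
    """One BCI selection replaces dozens of letter selections."""
    return llm_suggest(context, personal_phrases)[bci_choice]

phrases = ["Brussels sprouts would go with paella",
           "I feel quite comfortable in the device",
           "Jazz music is my favorite"]
print(compose_utterance("what vegetable goes with paella", phrases, 0))
```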

Speaker 1:

Wow, that's very exciting. I'll be very interested to see what the results end up being.

Speaker 3:

Yeah, it's interesting because, as you're probably aware, the FDA is pretty slow to adopt new technologies. I think I was one of the first people to submit a 510(k) with Bluetooth in the operating room, and Bluetooth has been around for quite a while. They're only now starting to get on the AI train, I would say. Recent guidance documents are being published, but they're still behind the curve from the industry's perspective. I think the last guidance document they published was on neural networks, multi-layer perceptrons, which is sort of old news at this point, especially as ChatGPT is prolific now and everyone is aware of what a large language model is, just by virtue of that.

Speaker 3:

But what's truly exciting for us is that the advent of AI, how prolific it is, and how much computing power we now have in such small form factors, has changed the BCI landscape quite significantly. If you look at what's happening in the industry now, with companies like Neuralink that are focused very much on implantable BCI, the whole premise there is that they're trying to get past the skull, the tissue, the hair, all the things that impede the signal, because traditionally non-invasive EEG, which is the primary sensing modality for BCI, is considered to be quite poor quality and unreliable. But ML and statistical signal processing have advanced, and so has the processing power, and that's unlocked all of these opportunities that historically were quite challenging. I think that's a big reason why you're starting to see this trend of regulatory filings for EEG-based therapies and diagnostics: the availability and understanding of AI and ML. And Cognixion is sort of at the forefront of that in speech generation, which is really, really exciting.
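For a feel of the signal-processing side Greg mentions, here is a small, self-contained Python example: band-pass filter a noisy synthetic EEG channel, then find the dominant frequency, which is the basic move behind detecting a visually evoked response. The parameters are illustrative assumptions, not values from the trial:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of making noisy, non-invasive EEG usable: filter one channel to
# a physiologically interesting band, then measure which frequency
# dominates. The 12 Hz "response" is synthetic, standing in for a
# stimulus-evoked signal buried in broadband noise.

FS = 256                        # sampling rate, Hz
t = np.arange(0, 4, 1 / FS)     # 4 seconds of synthetic data
eeg = 0.5 * np.sin(2 * np.pi * 12 * t) + np.random.randn(t.size)

b, a = butter(4, [5, 30], btype="bandpass", fs=FS)   # keep the 5-30 Hz band
clean = filtfilt(b, a, eeg)

# Locate the dominant frequency via the FFT power spectrum
spectrum = np.abs(np.fft.rfft(clean)) ** 2
freqs = np.fft.rfftfreq(clean.size, 1 / FS)
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~12 Hz
```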

Speaker 1:

That is exciting. I'm interested because it sounds like it's both about the hardware, which you have excelled at, and the personalization of it, but it's also about the software and how it manages to either diagnose or potentially even intervene in the patient's condition. So what is the most important piece?

Speaker 2:

What is the most important piece? You know, Ivanna, we put a lot of effort into explaining this audibly, but I actually have a video here that I could show you. It shows two different users, one who had a stroke and one who is locked in with ALS, and you can see how both of them are using the exact same system, but the software is personalized to them, and their utterances are radically different, because the deeply personalized AI models are like a digital twin of the individual. Think about how you speak and how you grew up: your personal language is different from mine, which is different from Greg's. Our life experiences are different. So how do you build that history of life experience, your vernacular versus my vernacular, while it's the same system across the board? I don't know if your viewers will appreciate this, but it gives you a tease. This first slide shows what I was trying to explain audibly.

Speaker 2:

We have a large council of Brainiacs, over 200. The picture in the middle is a visual of our headset, called the Axon-R. You can see it's an augmented reality headset with a tinted lens in front, so you still see the whole world around you, you still see the person's face, so you're not closed in. And you can see some EEG sensors positioned in the rear straps of the headset, oriented towards the visual cortex of the brain. As a way of thinking about it, we can almost see what you see through your EEG over the visual cortex. And then on the right we have an AI conversational application, which you'll get a sense of how it works.

Speaker 2:

This will show you an example of someone with a stroke communicating with their communication partner, and then another person who has ALS communicating with friends and relatives around them using the system. And again, this is just a demonstration from a research perspective; it's not commercial, and it's not clinically available as a medical device yet.

Speaker 1:

Okay, let's give it a try.

Speaker 2:

We call it assisted reality. Can you see this?

Speaker 1:

I see it, but I don't hear anything. Oh, there's no audio yet. "What do you think about asparagus?"

Speaker 2:

So she's speaking, and the headset is picking up what she's saying and transcribing it into his headset. "Just a combination of sautéed vegetables on the side." And that is driving his AI engine to generate options for him to choose. "Well, Brussels sprouts would go with paella." You can tell he's sarcastic. And this person is a rabbi, okay.

Speaker 1:

"I try to see the good in every situation."

Speaker 2:

Wow, nice.

Speaker 1:

"It's a blessing. It helps me communicate." Oh, that's really cool.

Speaker 2:

Could you ask him if he feels claustrophobic, or if he has issues wearing this

Speaker 3:

for long periods of time? Hey, this is incredible.

Speaker 2:

"God bless you." "Not at all."

Speaker 1:

"I feel quite comfortable in the device." Wow, we got a manual confirmation. "Is that what you meant to say?" "No, no." "But you can say yes." Model time.

Speaker 2:

Wow. And this is generally what it looks like when they're using it. They're mentally paying attention to these elements, and their brain is selecting these buttons. They're able to generate words, then they can generate situationally appropriate phrases, and then they pick which phrase.

Speaker 3:

And then they can either speak it or, in the future, send it to Alexa. "Jazz music is my favorite, I love the smooth vibes."

Speaker 2:

So this kind of interface moves from hunting and pecking through very fatiguing keyboard clicks into conversational speed, a 30-to-40-words-per-minute rate, where you can actually have conversations like you saw in the video. It's a breakthrough in human-computer interaction, but it's also just the beginning.

Speaker 1:

Yeah, and this is a completely non-invasive device, right? 100% non-invasive. Wow, this is impressive. But going back to: how do you make sure that it works for so many applications, but also across the different needs of users? You were mentioning patients who have a lot of movement, and patients who, on the other hand, are bedridden. How can you ensure that it works for the different scenarios?

Speaker 2:

Well, as I was saying earlier, we've done a lot of informal usability testing, a lot of surveys, qualitative research, feedback, primary research. We go into their homes to understand, generally, what they try to do and where they spend most of their time, through surveys and other kinds of things. Then we also get feedback from the caregivers and family members who spend a lot of time with them, to understand their preferences. So we get almost too much feedback, because some of these user requirements become paradoxical; some of them contradict each other, and then we have to make a judgment call on whether there's a way to satisfy both needs, even though they're completely opposite.

Speaker 3:

I think the other thing, Andreas, as well: if we think about what the mission is now, which has evolved a little bit from speech generation as the initial entry into the market, we're now trying to democratize BCI and make it affordable, cheaper, for medical device companies, healthcare systems, and private entities to bring these technologies to market. We recognize that there's a really wide range of different people that could potentially be using this, from otherwise healthy patients to people like the rabbi. So the way we've approached it, at least from a technical perspective, is modularizing it, not just in the technology but also in the way we intend to clear the components, so that maybe not every single component applies to every use case, but if 70% of the platform applies, that's still potentially a huge value add, right? Big savings. So the regulatory strategy, and how we're thinking about that from a platform perspective, is critical to the overall business model.

Speaker 3:

In addition to that, the way we've structured our engagement with customers, we deeply integrate with them, a lot of them, in particular healthcare systems, which in many ways are users of the platform. They're often the forgotten user, because we don't usually think of administrators or IT professionals or decision makers as users of the system. But we do integrate with them so that we can consider things like economic impact early, because that's another factor that often kills medical devices: healthcare systems can't really assess whether it's going to have a clinical benefit, whether it's worth the investment, or they try to buy it and it fails; the integration fails because it's not interoperable, or there are issues integrating it with hospital systems.

Speaker 3:

So that type of integration with those customers is also important: not just the end patient, but thinking about the whole ecosystem and the whole life cycle of a medical device, from early research, which is where our platform is mostly focused right now, all the way to commercialization and actually adopting it into a healthcare system, which are very different problems. But we've built the platform to be modular enough that it reduces that burden, helps people get across the finish line, and reduces risks, particularly for healthcare systems that have innovation centers, because they basically serially invest in devices that probably won't make it to market, which is quite expensive. And there are, I believe, 66 healthcare systems in the US that have innovation centers, so it's not a small number. So we do a lot, I guess, is the short answer.

Speaker 2:

Well, and also, Ivanna, if I can add to that: it's because our orientation has always been outcome driven, not science driven. We're always asking, can we make an impact, how big of an impact can we make, and working backwards from that impact statement. What we realized is that most medical device companies, whether it's hardware or software or some mix of that with cloud computing, are thinking about this in a very linear way. They do initial funding, they prove feasibility, they prove safety, they do regulatory. It's a long, protracted process, and at any point in time you could just fail. So instead of having a long linear process, we asked: can we actually start to push in on product-market fit, commercialization, payer engagement, lifetime value for the customer, caregiver engagement, healthcare IT, and cybersecurity all at once?

Speaker 2:

So we took on everything at once, but we said, can we get a low-resolution heat map of where the risk is, all at once, and then gradually add more resolution depending on where the risk is? It's not the best path for everyone. But our system was so complex, and we were asked over and over again by venture capitalists and investors, well, what about the downstream risk? And we said, let's go ahead and answer those questions now.

Speaker 2:

So as we're doing product development, we're also engaging with payers. As we're doing product development and bench verification of the systems, we're also in the homes with caregivers to understand what's going on, because we want to de-risk the commercial success as much as possible. Our aim isn't just to get through a regulatory finish line, because a lot of folks say, if I raise a bunch of money, I'm done, or if I've gotten FDA clearance, I'm done. And it's like, you're not done until the patient is using your stuff.

Speaker 3:

Yes, agreed. All the hurdles need to be cleared. And in this space, particularly in neurology, there are a lot of one-time filings from companies, right? There aren't that many that have multiple. So you're thinking, okay, either they created the product and it's just in sustaining forever with very minor changes, or it didn't work out, one or the other. Neither of which is great.

Speaker 1:

That is interesting. Incidentally, just today I was at an event about how modularizing software-as-a-medical-device regulatory submissions is the way to approval. So it's very topical today. I'm happy to hear that.

Speaker 2:

That's just general validation. And since the topic for your audience is really clinical validation and trials and the process, we're happy to say that we've also engaged with CROs, contract research organizations. They actually came to us asking whether they could use our platform for pivotal trials on some of the other things they're representing.

Speaker 2:

They didn't want to go into a pivotal trial on a Microsoft HoloLens. A lot of the FDA filings today that have to do with augmented or virtual reality are either on a Meta Quest system or on a Microsoft HoloLens system, neither of which is designed to medical-grade standards.

Speaker 2:

Meta's strategy is clearly not healthcare; their strategy is gaming. And Microsoft just discontinued the HoloLens, so I don't know what all these companies that are invested in HoloLens are going to do, but they're going to have to find an AR platform to move to, and HoloLens was never designed for patient use anyway. So, Greg, you can maybe talk about some of the FDA-cleared headsets, but we're in an interesting position: we're the leader as a medical-grade system. We've designed to medical-grade standards, right now for research use initially, which will then go into clinical trials. There really isn't another company to turn to if you want to scale up clinical trials into phase one, phase two, phase three on a validated headset.

Speaker 2:

There's no one else you can go to.

Speaker 1:

Cognixion's the one. Yeah, I would actually be curious: what would be the wildest thing you would like to happen with your technology? What would be the most amazing possible outcome, if everything just lined up perfectly?

Speaker 2:

There are a few things. There's a very real situation within neurology: statistically, within the United States, there are about 50 to 60 million people who have diagnosed functional brain health or mental health conditions and need access to a neurologist. So 50 million, say, as a round number, just in the US, and there are 15,000 practicing neurologists in the United States. 15,000.

Speaker 2:

So that's why many people might have a nine-to-12-month wait time to see a neurologist, right? And if you think about this geographically, most of the neurologists are in cities. There are about 69 neurologists in rural areas in the United States. So across the entire Midwest, all the rural states, you have 69 neurologists and more than 50 million people who need a neurologist. And so, going back to Greg's impetus to get into the field: boy, wouldn't it be nice to be able to drop-ship a headset like Cognixion's, drop it in the mail, send it to someone, and they could open it up, put it on their head, and initiate a remote therapy session with a neurologist, who could guide them through a series of clinically validated applications for whatever conditions that patient has, in their home. Imagine what that could look like. You'd massively disrupt the care delivery process.

Speaker 2:

Reduced costs, greater access and democratization, deep personalization across the spectrum, whether it's a TBI or a genetic issue or whatever. Imagine the quality of life that could happen across the board. That could be sub-chronic use cases, where someone maybe needs it for 30 days because they've been prescribed a headset for some kind of rehab, or they might need it just for a day for some diagnostic measurements, or they might need it prescribed for their entire life because they have ALS, and the degenerative condition is going to follow its glide path, and they're going to need progressively more and more AI helping them maintain their quality of life.

Speaker 2:

So in five to 10 years, you could imagine the world's first clinically validated app store, full of apps developed either by Cognixion or by major institutions like Mayo Clinic, Cleveland Clinic, Mount Sinai, Cedars, Holland Bloorview, and children's hospitals like CHOP. You look at all these health systems that have the ideas, the clinical expertise, the passion, and the funding: they could be buying our systems, rapidly prototyping novel ideas, and then spinning those applications out of their own health systems to make them available in a validated app store that could be deployed on a headset anywhere in the world. To me, that's transformative. I don't know what you think, Greg.

Speaker 3:

Well, you're always the bigger visionary than me. But for me, honestly, if we had even just one home-run therapy that took us to IPO, took us to acquisition, whatever; if we could just have one therapy that was really effective at treating depression, just depression, that would be huge. The US alone is facing a huge mental health crisis. There aren't enough providers. COVID changed the way a lot of people interact with therapists, some for the better, some not. But even if we just had that one thing, it would be incredible.

Speaker 1:

The benefits to patients would be enormous, just from the one therapy.

Speaker 3:

Yeah, therapy is expensive, right? It's super expensive. So if you had a therapy where, okay, you buy this hardware, maybe it's a subscription, whatever, but you can do it every day, 30 minutes, 40 minutes, that's game changing, right?

Speaker 2:

It's enormous, yeah. So I don't disagree with Greg; that's a base hit, but we're thinking about changing the world, you know? One indication at a time, I guess.

Speaker 1:

Yeah, one indication at a time. And now that we've talked a little bit about how you could change the entire world: towards the end of our podcast, we always ask our guests the same question. If we gave you the Transformation in Trials magic wand, and this magic wand can only change one thing in the life sciences industry, it's a very specific magic wand, what would you wish to change in this specific industry? I'll ask both of you individually; I'm curious to hear whether you converge or diverge on this answer. Greg, you lead the way on this one.

Speaker 3:

Only one thing, eh?

Speaker 1:

Yeah.

Speaker 3:

There are so many things I would change. One of the things that drives me nuts about regulatory filings and medical devices is the least burdensome path, because it actually ends up being quite a burden to figure out what it is. In many ways, I wish regulatory bodies were just more prescriptive about what they want, but it's very non-committal, right? They'll say, this is non-binding guidance, so we think you should do this, but we reserve the right to change our mind without notice and for any reason. It would be great if, for established codes, established regulations, it was: this is precisely what we want. And when they want something different, they tell you that, rather than kicking back a 510(k) and saying, oh, you should go do this testing now. So that's my thing. It's definitely going to be different from Andreas's.

Speaker 2:

Yeah, you know what's interesting: we just had a new investor join us. She's in Europe, so she's very keen on the MDR regs, and she built, ran, and sold a very significant CRO, so she brings us a wealth of knowledge about the clinical research process and where its strengths and weaknesses are. We've spent late nights talking about these frustrations, but I think the simplest soundbite is that it could take anywhere from $25 million to $300 million in R&D to get a therapeutic medical device cleared, just to get cleared, and more than 60% of that money is not in product development; it's in clinical validation.

Speaker 1:

That's significant, and it's not even in the hands of the patients.

Speaker 2:

Yes, not yet, and that's the problem, right. So how do you make clinical research cheaper, faster, easier? It's not rocket science, and it's highly templatized, or it could be. It's something where AI or other tools could really gut a lot of the costs in clinical research.

Speaker 3:

So there could probably be some real ways to address speed to market and the cost of clinical trials without jeopardizing efficacy. To segue from what Andreas is saying, this is actually something I'm really excited about with what we're doing. As he mentioned, the cost of clinical data is huge, and it's not getting cheaper; actually, it's getting more expensive, because the demand for data around safety and efficacy is only getting higher. But wouldn't it be great if there were a platform that provided a consistent experience and a way for researchers to collaborate on shared data, or make their data available in a consistent format? There are lots of problems out there, but they definitely have commonalities, or there are multiple researchers working on the same thing. They get on the Cognixion platform, and we offer a way of sharing clinical data. That's a huge value add.

Speaker 2:

Ivanna, do any companies come to mind, based on what Greg's describing?

Speaker 1:

No.

Speaker 3:

So there are some recent publications on sharing clinical data, how that might work, and what the ethical considerations are. It probably isn't as relevant to the pharmaceutical industry as it is to medical devices.

Speaker 1:

Yes. But, that being said, a lot of our guests do argue that more data sharing would lead to increased speed to market in clinical development. It would be valuable to know what didn't work, which compounds didn't work for which indications at other companies; just the elimination of those possibilities for a clinical program would be extremely helpful.

Speaker 2:

I can also add to that. What we've heard from a lot of our customers is that, by using a sort of mishmash of different technologies and trying to integrate them while maintaining consistent reliability across sessions and tests with patients, oftentimes they net out with an 80 to 90% loss in data quality, so they have to throw away most of their data.

Speaker 2:

And so that's something we find to be very important as a value proposition: the net yield of quality data you would get with a system like ours is head and shoulders above, exponentially greater. It's a much better ROI on clinical dollars spent for data on a platform like ours. And my other comment, and I think you can probably vouch for this, is that there's an emergence of combining closed-loop device technology with drugs, right?

Speaker 2:

So there are drug-device combos, especially in psychedelics and anything to do with mental health, ADHD, attention and focus issues, nervous system disorders. So if there are folks in your audience working on pharmacological, biopharma drugs who want to go into testing with a device like ours that's focused on neurodegenerative conditions, Alzheimer's, dementia, multiple sclerosis, ALS, these are intractable problems that everybody's trying to solve, and there could be opportunities to bring both together in some landmark studies, using our technology and some really novel drugs or therapies.

Speaker 1:

Well, Andreas, Gregg, if any of my listeners happen to think, you know what, this is something I would like to learn more about, and potentially do a joint clinical study, where can they reach out to you?

Speaker 2:

Yeah, you can initially just check out what we're doing at cognixion.com, that's C-O-G-N-I-X-I-O-N dot com, and from there you can contact us through our contact form, or you can find us on LinkedIn. Gregg and I are both on LinkedIn: Andreas Forsland and Gregg Johns.

Speaker 1:

Thank you so much. This was a really enjoyable conversation for me, and I'm sure our listeners will enjoy it too.

Speaker 2:

Likewise, Ivanna. It's been really great to be here. Thank you.

Speaker 1:

Thank you. You're listening to Transformation in Trials. If you have a suggestion for a guest for our show, reach out to Sam Parnell or Ivanna Rosendal on LinkedIn. You can find more episodes on Apple Podcasts, Spotify, Google Podcasts, or in any other player. Remember to subscribe and get the episodes hot off the editor.
