Yonah Welker is bridging technological frontiers in the age of AI

LEAP 2024 guest speaker Yonah Welker talks about the coming together of humanity and technology.

Photo-illustration: Nadia Middle East / WIRED Middle East

Returning to Riyadh for LEAP 2024, Yonah Welker, a serial technologist and proponent of cross-cultural collaboration, shared his insights on the symbiotic relationship between technology and humanity: “Two years ago we started the global movement of AI for Humanity. And the main idea behind this movement was that no matter what type of algorithm or system I would love to build, we’d have people and humanity behind it, and that’s why it affects the data, the historical context behind it, the potential biases, the positive or negative outcomes,” he says.

Welker’s journey traverses the divide between the western and eastern hemispheres, embodying the essence of a technology ambassador. “My personal focus is algorithms addressing the cognitive, sensory, and physical spectrum, such as AI for dyslexia, cognitive impairments, and autism, as well as medical technology and education. On one hand, I oversee portfolios in technology transfer in this field, on behalf of governments and through my personal projects. I also work on policies to better understand how to adopt these technologies.” His mission? To infuse every algorithm and system with the essence of human experience, transcending mere lines of code.

Welker spoke of his focus on algorithms catering to cognitive, sensory, and physical spectrums, emphasizing applications like AI for dyslexia, autism, and medical education. His endeavors extend beyond mere technological innovation; they encapsulate a profound commitment to human welfare.

The idea is that it is the worker, the everyday man or woman, who will be using AI technologies: a nurse, a doctor, a teacher. And so Welker works with UNESCO and governments to foster AI education. “I work on policies to better understand how to adopt these technologies. It is not only the system, but the classroom, the teacher or nurse who will use these technologies; they need a better understanding of literacy and capacities,” he says.

Reflecting on his collaborations, Welker underscored the transformative potential of technology in fostering global harmony. He recounted a poignant initiative that aimed to unite two Middle Eastern nations through collaborative technological endeavors, a testament to the unifying power of computer science. “I still believe that when we work together and respect the diversity behind the teams, we can create not only human-centered systems for hospitals, schools, and workplaces, but actually solve bigger social challenges through more collaborative cooperation at the national and regional level. That’s my message,” he says.

In navigating the complexities of international collaboration, Welker confronts multifaceted challenges. He acknowledges the nuanced intersectionalities within data, emphasizing the imperative of gender-specific studies and the recognition of historically marginalized communities. “AI is just two lines of code. Everything that happens, whether positive or negative, is actually about society. An algorithm is just a reflection of society. For instance, when we build AI for autism, we discover cognitive impairments unique to girls. We need to conduct gender-specific studies to better understand the data in this area. For years, medical studies primarily focused on men, excluding women and disregarding intersectionality. When discussing cognitive spectrums, we often overlook comorbidities or underlying conditions, as well as how some specific conditions are unique to certain ethnicities, geographical locations, or communities that were historically excluded from access to hospitals and classrooms,” he says.

Today, we frequently encounter generative AI, which integrates supervised, unsupervised, and reinforcement learning methods. Welker highlights that it’s crucial to acknowledge that each of these approaches carries its own inherent biases or inaccuracies. For example, supervised learning may exhibit bias stemming from individual subjectivity, whereas unsupervised learning, such as DNA clustering, can be influenced by biases or inaccuracies within public datasets, he says. Additionally, reinforcement learning, reliant on reward systems and environmental factors, can be constrained by its surroundings, leading to potential errors. Given the complexity of these factors, developing an app for autism presents a distinct challenge, necessitating consideration for the child, the parent, and the educator. According to Welker, this challenge encompasses both research and design facets, requiring thorough oversight to ensure efficacy and inclusivity.

Welker ended his discussion by saying: “Two years ago, when I first visited Riyadh and worked on the Global AI Summit, the idea of human capacity was at the forefront. Two years ago, the Saudis signed the agreement and declaration on AI ethics with UNESCO. Now, through the Center of AI Ethics and Research here in Riyadh, we can see how it comes in parallel, not only with the EU on AI and digital services, but also actively working with our counterparts globally. We all agree that we should balance the facilitation of technology and the protection of people, because humanity is always behind the system.”

For Yonah Welker, the trajectory forward is clear: to forge a future where humanity remains at the forefront of technological evolution.
