<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>G-Tech &#8211; Akingate Consultancy</title>
	<atom:link href="https://www.akingate.com/category/technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.akingate.com</link>
	<description>Technology, Creativity And Innovations</description>
	<lastBuildDate>Tue, 11 Nov 2025 20:02:20 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.akingate.com/wp-content/uploads/2025/01/cropped-AKINGATE-Logo.jpg?fit=32%2C32&#038;ssl=1</url>
	<title>G-Tech &#8211; Akingate Consultancy</title>
	<link>https://www.akingate.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">201648423</site>	<item>
		<title>ChatGPT-5: The Next-Level AI Revolution for Smarter Conversations in 2025</title>
		<link>https://www.akingate.com/chatgpt-5-the-next-level-ai-revolution-for-smarter-conversations-in-2025/</link>
					<comments>https://www.akingate.com/chatgpt-5-the-next-level-ai-revolution-for-smarter-conversations-in-2025/#comments</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Fri, 08 Aug 2025 14:55:53 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[AI productivity]]></category>
		<category><![CDATA[AI tools 2025]]></category>
		<category><![CDATA[GPT-5 features]]></category>
		<category><![CDATA[multimodal AI]]></category>
		<category><![CDATA[OpenAI ChatGPT update]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=6041</guid>

					<description><![CDATA[Reading time: 8–10 minutes ChatGPT-5 brings multimodal understanding and sharper reasoning to everyday workflows. In this article: What is ChatGPT-5? How ChatGPT-5 Differs from GPT-4 Key Features That Make ChatGPT-5 a Game-Changer Real-Life Uses of ChatGPT-5 Pros and Cons of [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>Reading time: 8–10 minutes</em></p>
<article><header>
<p><em>ChatGPT-5 brings multimodal understanding and sharper reasoning to everyday workflows.</em></p>
</header><nav class="toc" style="background: #f7f7fb; border: 1px solid #eee; padding: 1rem; border-radius: 8px;" aria-label="Table of contents"><strong>In this article:</strong>
<ol>
<li><a href="#what-is-chatgpt-5">What is ChatGPT-5?</a></li>
<li><a href="#gpt5-vs-gpt4">How ChatGPT-5 Differs from GPT-4</a></li>
<li><a href="#key-features">Key Features That Make ChatGPT-5 a Game-Changer</a></li>
<li><a href="#use-cases">Real-Life Uses of ChatGPT-5</a></li>
<li><a href="#pros-cons">Pros and Cons of ChatGPT-5</a></li>
<li><a href="#best-practices">How to Get the Best Out of ChatGPT-5</a></li>
<li><a href="#future">The Future Beyond GPT-5</a></li>
<li><a href="#faq">Quick FAQ About ChatGPT-5</a></li>
</ol>
</nav>
<section id="what-is-chatgpt-5">
<h2>What is ChatGPT-5?</h2>
<p>If you thought <a href="https://www.akingate.com/openai-gpt-4-5-with-greater-eq-improved-abilities/">ChatGPT‑4</a> was impressive, buckle up — ChatGPT‑5 levels up everything from comprehension to creativity. Built on next‑gen architecture, it’s designed to understand context more deeply, reason through complex prompts, and interact in ways that feel startlingly natural.</p>
<p>Instead of just <em>responding</em>, ChatGPT‑5 can <em>reason, analyze, and create</em> across formats. Whether you’re drafting proposals, explaining data, or producing content, it acts like a creative partner rather than a simple chatbot.</p>
</section>
<section id="gpt5-vs-gpt4">
<h2>How ChatGPT-5 Differs from GPT-4</h2>
<ul>
<li><strong>Multimodal understanding:</strong> Works across text, images, audio, and short video.</li>
<li><strong>Better context retention:</strong> Smoother long-form conversations and projects.</li>
<li><strong>Sharper reasoning:</strong> More coherent step-by-step breakdowns for complex tasks.</li>
<li><strong>Personalized interactions:</strong> Adapts to your tone and preferences.</li>
<li><strong>Reduced hallucinations:</strong> Improved factual accuracy and guardrails.</li>
</ul>
<p style="background: #fff8e6; border-left: 4px solid #ffbf00; padding: 0.75rem 1rem; margin: 1rem 0;"><strong>Tip:</strong> When you compare GPT-5 vs GPT-4, test identical prompts and evaluate clarity, correctness, and speed.</p>
</section>
<section id="key-features">
<h2>Key Features That Make ChatGPT-5 a Game-Changer</h2>
<h3>Multimodal Magic</h3>
<p>Upload an image, ask a question about it, get a voice answer, or have GPT‑5 summarize a short clip — all in one session. It turns visuals and audio into actionable insights.</p>
<h3>Hyper-Personalization</h3>
<p>Prefer concise answers or deep dives? GPT‑5 adapts. Over time, it mirrors your style and context, making collaboration feel effortless.</p>
<h3>Advanced Reasoning &amp; Analysis</h3>
<p>From untangling RFPs to explaining statistical models in plain English, GPT‑5 handles layered instructions and delivers step‑wise logic.</p>
<h3>Collaborative Creativity</h3>
<p>Toss in half-baked ideas and get polished outlines, scripts, or product angles. It’s like brainstorming with a tireless, well‑read partner.</p>
<h3>Accessibility Upgrades</h3>
<p>With stronger multilingual and voice capabilities, GPT‑5 lowers barriers for global teams and learners.</p>
</section>
<section id="use-cases">
<h2>Real-Life Uses of ChatGPT-5</h2>
<h3>For Businesses</h3>
<ul>
<li>Automate support with natural, friendly replies.</li>
<li>Draft multi-language campaigns and landing copy.</li>
<li>Turn messy notes into action plans and summaries.</li>
</ul>
<h3>For Education</h3>
<ul>
<li>On-demand tutoring with interactive explanations.</li>
<li>Turn complex topics into visuals and analogies.</li>
<li>Generate essay outlines, quiz items, and feedback.</li>
</ul>
<h3>For Creatives</h3>
<ul>
<li>Brainstorm blog topics and video scripts fast.</li>
<li>Storyboards and concept art prompts in minutes.</li>
<li>SEO descriptions and product copy that convert.</li>
</ul>
</section>
<section id="pros-cons" style="display: grid; grid-template-columns: 1fr; gap: 1rem;">
<div>
<h2>Pros of ChatGPT-5</h2>
<ul>
<li>Highly accurate, context-aware responses.</li>
<li>True multimodal workflows (text, image, audio, video).</li>
<li>Personalized outputs aligned to your voice.</li>
<li>Faster processing and response times.</li>
</ul>
</div>
<div>
<h2>Cons (What to Watch Out For)</h2>
<ul>
<li>Still benefits from clear, specific prompts.</li>
<li>Critical facts may require verification.</li>
<li>Some features could sit behind paywalls.</li>
</ul>
</div>
</section>
<section id="best-practices">
<h2>How to Get the Best Out of ChatGPT-5</h2>
<ol>
<li><strong>Be specific:</strong> “Give 5 innovative marketing ideas for a neighborhood bakery in 2025” beats “marketing tips.”</li>
<li><strong>Use follow-ups:</strong> Iterate to refine tone, length, or format.</li>
<li><strong>Leverage multimodality:</strong> Add images, docs, or audio for richer context.</li>
<li><strong>Set roles:</strong> “Act as a senior copywriter / data analyst / tutor” to guide outputs.</li>
</ol>
<div style="background: #0b5fff; color: #fff; padding: 1rem 1.25rem; border-radius: 10px; margin: 2rem 0;"><strong>Copy-paste prompt:</strong> “Act as a product marketer. Turn these feature notes into a 120-word landing page hero with a CTA and 3 bullets. Keep the tone friendly and confident.”</div>
<p>Want to go deeper into <a href="https://en.wikipedia.org/wiki/Prompt_engineering" target="_blank" rel="noopener">prompt engineering</a>? It’ll pay dividends.</p>
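<p>The "be specific" and "set roles" tips above can be sketched in code. This is a minimal, hedged illustration using the widely adopted chat-message convention of role/content dictionaries; <code>build_messages</code> is an illustrative helper invented for this example, not part of any official SDK, and it makes no network calls.</p>

```python
# Sketch of the role-setting + specificity pattern from the list above.
# build_messages is a hypothetical helper: a system message fixes the persona,
# one user message carries the concrete task plus explicit constraints.

def build_messages(role: str, task: str, constraints: list[str]) -> list[dict]:
    """Compose a chat request following the common role/content convention."""
    system = f"Act as a {role}."
    user = task
    if constraints:
        # Fold constraints into the user message so the request stays specific.
        user += " Constraints: " + "; ".join(constraints) + "."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    role="product marketer",
    task="Turn these feature notes into a 120-word landing page hero with a CTA and 3 bullets.",
    constraints=["friendly, confident tone", "plain language"],
)
print(messages[0]["content"])  # → Act as a product marketer.
```

<p>The same structure feeds directly into most chat-completion APIs; iterating on the <code>constraints</code> list is the programmatic equivalent of the follow-up tip.</p>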
</section>
<section id="future">
<h2>The Future Beyond GPT-5</h2>
<p>With ChatGPT‑5, AI shifts from background tool to collaborative teammate. Expect tighter integrations across work suites, richer real‑time collaboration, and more intuitive voice/visual workflows.</p>
<p>Curious about the platform roadmap? Keep an eye on <a href="https://openai.com" target="_blank" rel="noopener">OpenAI’s official updates</a> for policy changes, features, and enterprise releases.</p>
</section>
<section id="faq">
<h2>Quick FAQ About ChatGPT-5</h2>
<details>
<summary>Is ChatGPT-5 available to everyone?</summary>
<p>Generally yes, but premium or enterprise features may require a paid plan.</p>
</details><details>
<summary>Does GPT-5 work offline?</summary>
<p>No — it runs in the cloud, so it needs an internet connection.</p>
</details><details>
<summary>Can ChatGPT-5 replace human jobs?</summary>
<p>It automates repeatable tasks and enhances productivity, but humans steer strategy and judgment.</p>
</details><details>
<summary>Is ChatGPT-5 safe?</summary>
<p>It ships with stronger safety layers, but you should still review sensitive outputs and protect private data.</p>
</details></section>
<section aria-label="Share">
<blockquote>“ChatGPT‑5 isn’t just an upgrade — it’s a leap forward in how we interact with technology.”</blockquote>
</section>
</article>


]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/chatgpt-5-the-next-level-ai-revolution-for-smarter-conversations-in-2025/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6041</post-id>	</item>
		<item>
		<title>OpenAI GPT-4.5, Most Knowledgeable Model with Greater EQ and Improved Abilities</title>
		<link>https://www.akingate.com/openai-gpt-4-5-with-greater-eq-improved-abilities/</link>
					<comments>https://www.akingate.com/openai-gpt-4-5-with-greater-eq-improved-abilities/#comments</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Sat, 01 Mar 2025 19:16:39 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI in customer service]]></category>
		<category><![CDATA[AI model advancements 2025]]></category>
		<category><![CDATA[Applications of GPT-4.5 in business]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Future of OpenAI models.]]></category>
		<category><![CDATA[GPT-4.5 accessibility]]></category>
		<category><![CDATA[GPT-4.5 emotional intelligence]]></category>
		<category><![CDATA[GPT-4.5 reduced hallucinations]]></category>
		<category><![CDATA[GPT-4.5 vs GPT-4]]></category>
		<category><![CDATA[OpenAI GPT-4.5 features]]></category>
		<category><![CDATA[OpenAI language model updates]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5821</guid>

					<description><![CDATA[Introduction OpenAI has unveiled GPT-4.5, a groundbreaking advancement in artificial intelligence designed to enhance accuracy, reasoning, and emotional intelligence (EQ). This latest iteration builds on its predecessors, offering users a more reliable and empathetic AI experience. As AI continues to [&#8230;]]]></description>
										<content:encoded><![CDATA[<div class="theconversation-article-body">
<h3><strong>Introduction</strong></h3>
<p>OpenAI has unveiled <a href="https://www.akingate.com/fewer-generic-ai-chatbots-like-chatgpt/">GPT</a>-4.5, a groundbreaking advancement in artificial intelligence designed to enhance accuracy, reasoning, and emotional intelligence (EQ). This latest iteration builds on its predecessors, offering users a more reliable and empathetic AI experience. As AI continues to shape industries, GPT-4.5 represents a significant leap forward in language modeling, catering to businesses, researchers, and general users alike.</p>
<h3><strong>What is GPT-4.5?</strong></h3>
<p>GPT-4.5 is the newest version of OpenAI’s language model, developed to provide enhanced comprehension, logical reasoning, and improved contextual awareness. Compared to previous models, it offers a broader knowledge base, reduced misinformation, and an ability to understand and respond to human emotions more effectively. OpenAI has tailored this version to meet the needs of professionals in diverse fields, from content creation to customer service and education.</p>
<h3><strong>Key Enhancements and Features</strong></h3>
<h4><strong>Greater Knowledge Base &amp; Accuracy</strong></h4>
<p>GPT-4.5 benefits from an expanded training dataset, improving its ability to generate fact-based responses across a wider range of topics. With refined algorithms, it can now provide more accurate and contextually relevant information.</p>
<h4><strong>Reduced Hallucinations</strong></h4>
<p>A major improvement in GPT-4.5 is the reduction of “hallucinations”, that is, instances where the AI generates incorrect information. On OpenAI’s SimpleQA benchmark, the hallucination rate drops from 61.8% for GPT-4o to 37.1%, significantly enhancing reliability.</p>
<h4><strong>Improved Emotional Intelligence (EQ)</strong></h4>
<p>One of the standout features of GPT-4.5 is its heightened emotional intelligence. By analyzing text tone, sentiment, and context more effectively, it can generate responses that are not only factually correct but also empathetic and contextually appropriate.</p>
<h4><strong>Advanced Reasoning and Problem-Solving Capabilities</strong></h4>
<p>GPT-4.5 demonstrates superior multi-step reasoning skills, making it highly effective for problem-solving in fields like programming, mathematics, and scientific research. Its ability to break down complex problems into manageable steps has been greatly enhanced.</p>
<h3><strong>Real-World Applications of GPT-4.5</strong></h3>
<h4><strong>Business &amp; Customer Support</strong></h4>
<p>GPT-4.5 enhances automated customer service by providing more human-like interactions. Businesses can integrate it into chatbots and virtual assistants to improve customer experience, reduce response time, and increase engagement.</p>
<h4><strong>Education &amp; Learning</strong></h4>
<p>As an AI tutor, GPT-4.5 can provide students with personalized learning experiences, offering explanations tailored to individual needs. It also assists educators in creating content and automating administrative tasks.</p>
<h4><strong>Content Creation &amp; Creativity</strong></h4>
<p>Writers, marketers, and content creators benefit from GPT-4.5’s improved ability to generate high-quality articles, scripts, and marketing copy. The model offers enhanced creativity and ideation, making it a valuable tool in media and entertainment industries.</p>
<h4><strong>Healthcare &amp; Mental Health Assistance</strong></h4>
<p>GPT-4.5 can assist healthcare professionals by providing preliminary symptom analysis and mental health support. Its improved EQ makes it more effective in mental health applications, offering empathetic responses to users seeking support.</p>
<h3><strong>Accessibility &amp; Availability</strong></h3>
<p>Initially, GPT-4.5 is available to ChatGPT Pro subscribers and developers as part of a research preview. OpenAI plans to gradually roll it out to additional user tiers in the coming weeks. Given its advanced capabilities, pricing considerations will play a crucial role in its accessibility to a broader audience.</p>
<h3><strong>Challenges and Limitations</strong></h3>
<p>Despite its advancements, GPT-4.5 comes with challenges:</p>
<ul>
<li><strong>High Computational Cost</strong> – Running GPT-4.5 requires substantial computational power, making it resource-intensive.</li>
<li><strong>Potential Ethical Concerns</strong> – OpenAI continues to address biases and ethical considerations to ensure responsible AI use.</li>
<li><strong>Need for Human Oversight</strong> – While improved, AI-generated content still requires human verification to maintain accuracy and ethical standards.</li>
</ul>
<h3><strong>Future of OpenAI and AI Advancements</strong></h3>
<p>With the launch of GPT-4.5, the AI landscape is evolving rapidly. OpenAI has hinted at continued improvements, with speculation about the eventual release of GPT-5. The organization remains committed to responsible AI development, balancing innovation with ethical considerations.</p>
<h3><strong>Conclusion</strong></h3>
<p>GPT-4.5 is a significant step forward in AI technology, offering enhanced knowledge accuracy, reasoning, and emotional intelligence. Businesses, educators, and content creators stand to benefit from its advancements, while OpenAI continues to refine AI’s role in everyday life. As AI progresses, GPT-4.5 paves the way for even more sophisticated and responsible AI applications.</p>
<h3><strong>FAQs</strong></h3>
<p><strong>What are the main differences between GPT-4 and GPT-4.5?</strong></p>
<p>GPT-4.5 improves upon GPT-4 with a larger knowledge base, better emotional intelligence, and a lower rate of misinformation.</p>
<p><strong>How does GPT-4.5 improve emotional intelligence in AI?</strong></p>
<p>It analyzes tone and sentiment more effectively, allowing for more empathetic and context-aware responses.</p>
<p><strong>Is GPT-4.5 available for free users?</strong></p>
<p>Currently, it is accessible to ChatGPT Pro users and developers, with plans for wider availability.</p>
<p><strong>How does OpenAI plan to address AI-related ethical concerns?</strong></p>
<p>OpenAI is implementing bias reduction techniques and human oversight mechanisms to ensure ethical AI use.</p>
<p><strong>Will GPT-4.5 require more computational resources than previous models?</strong></p>
<p>Yes, due to its enhanced capabilities, GPT-4.5 demands higher computational power, making it costlier to run.</p>
</div>
<p><strong>Citations</strong></p>
<ul>
<li><a href="https://nypost.com/2025/02/28/business/sam-altmans-openai-launches-gpt-4-5-with-fewer-hallucinations-as-ai-race-heats-up/" target="_blank" rel="noopener">nypost.com</a> (Sam Altman&#8217;s OpenAI launches GPT-4.5 with fewer &#8216;hallucinations&#8217; as AI race heats up)</li>
<li><a href="https://chrisyandata.medium.com/openai-has-unveiled-gpt-4-5-does-it-worth-d777e77dd0dc" target="_blank" rel="noopener">chrisyandata.medium.com</a> (OpenAI has unveiled GPT-4.5 Does it worth? | by Chris Yan &#8211; Medium)</li>
<li><a href="https://www.business-standard.com/technology/tech-news/openai-releases-gpt-4-5-ai-model-with-greater-eq-all-you-need-to-know-125022800262_1.html" target="_blank" rel="noopener">business-standard.com</a> (OpenAI releases GPT-4.5 AI model with greater &#8216;EQ&#8217;: All you need to know)</li>
<li><a href="https://www.businessinsider.com/openai-sam-altman-releases-gpt-4-5-emotionally-intelligent-model-2025-2" target="_blank" rel="noopener">businessinsider.com</a> (Sam Altman says OpenAI&#8217;s new ChatGPT-4.5 is a more emotionally intelligent model but warns that it&#8217;s &#8216;expensive&#8217; to train and run)</li>
</ul>
<p><a href="https://www.freepik.com/free-photo/programming-background-collage_34089166.htm#fromView=search&amp;page=1&amp;position=3&amp;uuid=d395473b-2486-4795-95ab-bba88b0a5392&amp;query=AI+program" target="_blank" rel="noopener">Image by freepik</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/openai-gpt-4-5-with-greater-eq-improved-abilities/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5821</post-id>	</item>
		<item>
		<title>DeepSeek: China’s gamechanging AI system has big implications for UK tech development</title>
		<link>https://www.akingate.com/deepseek-chinas-gamechanging-ai-system-has-big-implications-for-uk-tech-development/</link>
					<comments>https://www.akingate.com/deepseek-chinas-gamechanging-ai-system-has-big-implications-for-uk-tech-development/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Tue, 28 Jan 2025 20:25:14 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deepseek]]></category>
		<category><![CDATA[Give me perspective]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5812</guid>

					<description><![CDATA[DeepSeek sent ripples through the global tech landscape this week as it soared above ChatGPT in Apple’s app store. The meteoric rise has shifted the dynamics of US-China tech competition, shocked global tech stock valuations, and reshaped the future direction [&#8230;]]]></description>
										<content:encoded><![CDATA[<h4 class="theconversation-article-title">DeepSeek <a href="https://www.bbc.co.uk/news/articles/c0qw7z2v1pgo" target="_blank" rel="noopener">sent ripples</a> through the global tech landscape this week as it soared above ChatGPT in Apple’s app store. The meteoric rise has <a href="https://www.cnbc.com/2025/01/27/nvidia-falls-10percent-in-premarket-trading-as-chinas-deepseek-triggers-global-tech-sell-off.html" target="_blank" rel="noopener">shifted the dynamics</a> of US-China tech competition, shocked global tech stock valuations, and reshaped the future direction of artificial intelligence (AI) development.</h4>
<div class="theconversation-article-body">
<p>Among the industry buzz created by DeepSeek’s rise to prominence, one question looms large: what does this mean for the strategy of the <a href="https://aiindex.stanford.edu/vibrancy/" target="_blank" rel="noopener">third leading global nation for AI development</a> – the United Kingdom?</p>
<p>The generative AI era was kickstarted by the <a href="https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/" target="_blank" rel="noopener">release of ChatGPT</a> on November 30 2022, when large language models (LLMs) entered mainstream consciousness and began reshaping industries and workflows, while everyday users explored new ways to write, brainstorm, search and code. We are now witnessing the “DeepSeek moment” – a pivotal shift that demonstrates the viability of a more efficient and cost-effective approach for AI development.</p>
<p>DeepSeek isn’t just another AI tool. Unlike ChatGPT and other major LLMs developed by tech giants and AI startups in the USA and Europe, DeepSeek represents a significant evolution in the way AI models are developed and trained.</p>
<p>Most existing approaches rely on large-scale computing power and datasets (used to “train” or improve the AI systems), limiting development to very few extremely wealthy market players. DeepSeek not only demonstrates a significantly cheaper and more efficient way of training AI models; its <a href="https://tlo.mit.edu/understand-ip/exploring-mit-open-source-license-comprehensive-guide" target="_blank" rel="noopener">open-source “MIT” licence</a> (a permissive licence named after the Massachusetts Institute of Technology, where the licence originated) also allows anyone to deploy and build on the tool.</p>
<p>This helps democratise AI, taking up the mantle from US company OpenAI – whose <a href="https://openai.com/our-structure/" target="_blank" rel="noopener">initial mission was</a> “to build artificial general intelligence (AGI) that is safe and benefits all of humanity” – enabling smaller players to enter the space and innovate.</p>
<p>By making cutting-edge AI development accessible and affordable to all, DeepSeek has reshaped the competitive landscape, allowing innovation to flourish beyond the confines of large, resource-rich organisations and countries.</p>
<p>It has also set a new benchmark for efficiency in its approach, by training its model at a fraction of the cost, and matching – even surpassing – the performance of most existing LLMs. By employing innovative algorithms and architectures, it is delivering superior results with significantly lower computational demands and environmental impact.</p>
<h2>Why DeepSeek matters</h2>
<p>DeepSeek was conceived by a group of <a href="https://www.wired.com/story/deepseek-china-model-ai/" target="_blank" rel="noopener">quantitative trading experts</a> in China. This unconventional origin holds lessons for the UK and the US.</p>
<p>While the UK – particularly London – has long attracted scientific and technological excellence, many of the highest achieving young graduates have tended to disproportionately opt for <a href="https://ifamagazine.com/finance-remains-most-attractive-career-choice-as-low-pay-a-top-job-concern-for-grads/?utm_source=chatgpt.com" target="_blank" rel="noopener">careers in finance</a>, something that has come at the expense of innovation in other critical sectors such as AI. Diversifying the pathways for STEM (science, technology, engineering and maths) professionals could yield transformative outcomes.</p>
<p>The UK government’s recent and much-publicised 50-point <a href="https://www.gov.uk/government/publications/ai-opportunities-action-plan-government-response/ai-opportunities-action-plan-government-response" target="_blank" rel="noopener">action plan on AI</a> offers glimpses of progressive intent but also displays a lack of boldness to drive real change. Incremental steps are not sufficient in such a fast-moving environment. The UK needs a new plan that leverages its unique strengths while addressing systemic weaknesses.</p>
<p>Firstly, it’s important to recognise that the UK’s comparative advantage lies in its leading interdisciplinary expertise. World-class universities, thriving fintech and dynamic professional services, and creative sectors offer fertile ground for AI applications that extend beyond traditional tech silos. The intersection of AI with finance, law, creative industries, and medicine presents opportunities to lead in some niche but high-impact areas.</p>
<p>The UK’s funding and regulatory frameworks are due for an overhaul. DeepSeek’s development underscores the importance of agile, well-funded ecosystems that can support big, ambitious “moonshot” projects. Current UK funding mechanisms are bureaucratic and fragmented, favouring incremental innovations over radical breakthroughs, sometimes stifling innovation rather than nurturing it. Simplifying grant applications and offering targeted tax incentives for AI startups would represent a healthy start.</p>
<p>Finally, it will be critical for the UK to keep its talent in the country. The UK’s <a href="https://www.akingate.com/ai-how-it-hands-power-to-machines-to-transform-the-way-we-view-the-world/">AI</a> sector <a href="https://www.theguardian.com/science/2017/nov/02/big-tech-firms-google-ai-hiring-frenzy-brain-drain-uk-universities" target="_blank" rel="noopener">faces a brain drain</a> as top talent gravitates toward better-funded opportunities in the US and China. Initiatives such as public-private partnerships for AI research development can help anchor talent at home.</p>
<p>DeepSeek’s rise is an excellent example of strategic foresight and execution. It doesn’t merely aim to improve existing models but redefines the boundaries of how AI could be developed and deployed – while demonstrating efficient, cost-effective approaches that can yield astounding results. The UK should adopt a similarly ambitious mindset, focusing on areas where it can set global standards rather than playing catch-up.</p>
<p>AI’s geopolitics cannot be ignored either. As the US and China compete, the UK has a critical role as the trusted intermediary and ethical leader in AI governance. The UK can punch above its weight on the global stage by championing transparent AI standards and fostering international collaboration.</p>
<p>DeepSeek’s success should serve as a wake-up call. Britain has the talent, institutions and entrepreneurial spirit to be a significant leading player in AI – but it must act decisively, and now.</p>
<p>It is time to move beyond token gestures, embrace bold strategies that move the needle, and position the UK as a leader in an AI-driven future. This moment calls for action, not just more conversation.</p>
<p>DeepSeek has raised the bar. It is now up to the UK to meet it.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img data-recalc-dims="1" decoding="async" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://i0.wp.com/counter.theconversation.com/content/248387/count.gif?resize=1%2C1&#038;ssl=1" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p>Author: <a href="https://theconversation.com/profiles/feng-li-113073" target="_blank" rel="noopener">Feng Li</a>, Chair of Information Management, Associate Dean for Research &amp; Innovation, Bayes Business School, <em><a href="https://theconversation.com/institutions/city-st-georges-university-of-london-1047" target="_blank" rel="noopener">City St George&#8217;s, University of London</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
</div>
<p><a href="https://www.freepik.com/free-photo/ai-technology-microchip-background-futuristic-innovation-technology-remix_16016701.htm#fromView=search&amp;page=1&amp;position=5&amp;uuid=23ed4499-6e42-48ba-9aa1-3ca51daa32a7&amp;query=AI++GPT" target="_blank" rel="noopener">Image by rawpixel.com on Freepik</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/deepseek-chinas-gamechanging-ai-system-has-big-implications-for-uk-tech-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5812</post-id>	</item>
		<item>
		<title>5 questions schools and universities should ask before they purchase AI tech products</title>
		<link>https://www.akingate.com/5-questions-schools-and-universities-should-ask-before-they-purchase-ai-tech-products/</link>
					<comments>https://www.akingate.com/5-questions-schools-and-universities-should-ask-before-they-purchase-ai-tech-products/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Mon, 15 Apr 2024 20:11:04 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[Education]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Educational technology]]></category>
		<category><![CDATA[Higher ed attainment]]></category>
		<category><![CDATA[K-12 education]]></category>
		<category><![CDATA[Schools]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[TED talks]]></category>
		<category><![CDATA[US Schools]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5743</guid>

					<description><![CDATA[Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform education. The most recent? Technologies and apps that include or are powered by generative artificial intelligence, also known as GenAI. These technologies [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform <a href="https://www.akingate.com/if-ai-is-to-become-a-key-tool-in-education-access-has-to-be-equal/">education</a>. The most recent? Technologies and apps that include or are powered by generative <a href="https://www.akingate.com/artificial-intelligence-needs-to-be-trained-on-culturally-diverse-datasets-to-avoid-bias/">artificial intelligence</a>, also known as GenAI.</p>
<p>These technologies are sold on the potential they hold for education. For example, Khan Academy’s founder opened his <a href="https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education" target="_blank" rel="noopener">2023 TED Talk</a> by arguing that “we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”</p>
<figure><iframe src="https://www.youtube.com/embed/hJP5GqnTrNo?wmode=transparent&amp;start=0" width="440" height="260" frameborder="0" allowfullscreen="allowfullscreen"></iframe><figcaption><span class="caption">‘How AI Could Save (Not Destroy) Education’</span></figcaption></figure>
<p>As optimistic as these visions of the future may be, the realities of educational technology over the past few decades have not lived up to their promises. Rigorous investigations of technology after technology – from <a href="https://mitpress.mit.edu/9780262546065/teaching-machines/" target="_blank" rel="noopener">mechanical machines</a> to <a href="https://www.hup.harvard.edu/books/9780674011090" target="_blank" rel="noopener">computers</a>, from <a href="https://mitpress.mit.edu/9780262537445/the-charisma-machine/" target="_blank" rel="noopener">mobile devices</a> to <a href="https://www.hup.harvard.edu/books/9780674089044" target="_blank" rel="noopener">massive open online courses, or MOOCs</a> – have identified the ongoing failures of technology to transform education.</p>
<p>Yet, educational technology evangelists <a href="https://www.routledge.com/Schools-and-Schooling-in-the-Digital-Age-A-Critical-Analysis/Selwyn/p/book/9780415589307" target="_blank" rel="noopener">forget, remain unaware or simply do not care</a>. Or they may be overly optimistic that the next new technology will be different than before.</p>
<p>When vendors and startups pitch their AI-powered products to schools and universities, educators, administrators, parents, taxpayers and others ought to be asking questions guided by past lessons before making purchasing decisions.</p>
<p>As a <a href="https://www.veletsianos.com/about-2/" target="_blank" rel="noopener">longtime researcher</a> who examines <a href="https://www.aupress.ca/books/120258-emergence-and-innovation-in-digital-learning/" target="_blank" rel="noopener">new technology in education</a>, here are five questions I believe should be answered before school officials purchase any technology, app or platform that relies on AI.</p>
<h2>1. Which educational problem does the product solve?</h2>
<p>One of the most important questions that educators ought to be asking is whether the technology makes a real difference in the lives of learners and teachers. Is the technology a solution to a specific problem or is it a solution in search of a problem?</p>
<p>To make this concrete, consider the following: Imagine procuring a product that uses GenAI to answer course-related questions. Is this product solving an identified need, or is it being introduced to the environment simply because it can now provide this function? To answer such questions, schools and universities ought to conduct <a href="https://tech.ed.gov/files/2023/01/2023.01_Dear_Colleague_Federal_Funding_Technology.pdf" target="_blank" rel="noopener">needs analyses</a>, which can help them identify their most pressing concerns.</p>
<h2>2. Is there evidence that a product works?</h2>
<p>Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads <a href="http://nepc.colorado.edu/publication/ai" target="_blank" rel="noopener">some researchers</a> to encourage education policymakers to put off buying products until such evidence arises. Others suggest <a href="https://web.archive.org/web/20240409231421/https://www.linkedin.com/feed/update/urn:li:activity:7171608987631640576/" target="_blank" rel="noopener">relying on whether the product’s design is grounded in foundational research</a>.</p>
<p>Unfortunately, a central source for product information and evaluation does not exist, which means that the onus of assessing products falls on the consumer. My recommendation is to consider a pre-GenAI recommendation: Ask vendors to provide independent and third-party studies of their products, but <a href="https://link.springer.com/article/10.1007/s11423-019-09649-4" target="_blank" rel="noopener">use multiple means for assessing the effectiveness of a product</a>. This includes reports from peers and primary evidence.</p>
<p>Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for <a href="https://hechingerreport.org/ed-tech-companies-promise-results-but-their-claims-are-often-based-on-shoddy-research/" target="_blank" rel="noopener">unsubstantiated claims</a>.</p>
<h2>3. Did educators and students help develop the product?</h2>
<p>Oftentimes, there is a “<a href="https://www.kqed.org/mindshift/26416/closing-the-gap-between-educators-and-entrepreneurs" target="_blank" rel="noopener">divide between what entrepreneurs build and educators need</a>.” This leads to products divorced from the realities of teaching and learning.</p>
<p>For example, one shortcoming of the <a href="https://laptop.org/" target="_blank" rel="noopener">One Laptop Per Child</a> program – an ambitious program that sought to put small, cheap but sturdy laptops in the hands of children from families of lesser means – is that the laptops were designed for <a href="https://mitpress.mit.edu/9780262537445/the-charisma-machine/" target="_blank" rel="noopener">idealized younger versions of the developers themselves</a>, not so much the children who were actually using them.</p>
<p>Some researchers have recognized this divide and have developed initiatives in which entrepreneurs and educators <a href="https://citejournal.org/volume-19/issue-1-19/general/learning-across-boundaries-educator-and-startup-involvement-in-the-educational-technology-innovation-ecosystem/" target="_blank" rel="noopener">work together</a> to <a href="https://web.archive.org/web/20131230135302/http://panelpicker.sxsw.com/vote/21723" target="_blank" rel="noopener">improve educational technology products</a>.</p>
<p>Questions to ask vendors might be: In what ways were educators and learners included? How did their input influence the final product? What were their major concerns and how were those concerns addressed? Were they representative of the various groups of students who might use these tools, including in terms of age, gender, race, ethnicity and socioeconomic background?</p>
<h2>4. What educational beliefs shape this product?</h2>
<p>Educational technology is <a href="https://www.routledge.com/Distrusting-Educational-Technology-Critical-Questions-for-Changing-Times/Selwyn/p/book/9780415708005" target="_blank" rel="noopener">rarely neutral</a>. It is designed by people, and people have beliefs, experiences, ideologies and biases that shape the technologies they develop.</p>
<p>It is important for educational technology products to <a href="https://citejournal.org/volume-19/issue-1-19/general/learning-across-boundaries-educator-and-startup-involvement-in-the-educational-technology-innovation-ecosystem/" target="_blank" rel="noopener">support the kinds of learning environments that educators aspire to create for their students</a>. Questions to ask include: What pedagogical principles guide this product? What particular kinds of learning does it support or discourage? You do not need to settle for generalities, such as a theory of learning or cognition.</p>
<h2>5. Does the product level the playing field?</h2>
<p>Finally, people ought to ask how a product addresses educational inequities. Is this technology going to help reduce the learning gaps between different groups of learners? Or is it one that aids some learners – <a href="https://www.science.org/doi/full/10.1126/science.aab3782" target="_blank" rel="noopener">often those who are already successful or privileged</a> – but not others? Is it adopting an asset-based or a deficit-based approach to addressing inequities?</p>
<p>Educational technology vendors and startups may not have answers to all of these questions. But they should still be asked and considered. Answers could lead to improved products.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img data-recalc-dims="1" decoding="async" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://i0.wp.com/counter.theconversation.com/content/226900/count.gif?resize=1%2C1&#038;ssl=1" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p>Author: <a href="https://theconversation.com/profiles/george-veletsianos-112089" target="_blank" rel="noopener">George Veletsianos</a>, Professor of learning technologies, <em><a href="https://theconversation.com/institutions/university-of-minnesota-1271" target="_blank" rel="noopener">University of Minnesota</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
<p>&nbsp;</p>
<p><strong>Image Credits:</strong> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/students-learning-alphabet-with-digital-tablets-royalty-free-image/699084035?phrase=technology+classroom&amp;adppopup=true" target="_blank" rel="noopener">Ariel Skelley via Getty Images</a></span></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/5-questions-schools-and-universities-should-ask-before-they-purchase-ai-tech-products/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5743</post-id>	</item>
		<item>
		<title>Your smart watch isn’t a medical device – but it is tracking all your health data</title>
		<link>https://www.akingate.com/your-smart-watch-isnt-a-medical-device-but-it-is-tracking-all-your-health-data/</link>
					<comments>https://www.akingate.com/your-smart-watch-isnt-a-medical-device-but-it-is-tracking-all-your-health-data/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Mon, 11 Mar 2024 12:59:14 +0000</pubDate>
				<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[Sport]]></category>
		<category><![CDATA[Apple]]></category>
		<category><![CDATA[Educate me]]></category>
		<category><![CDATA[Exercise]]></category>
		<category><![CDATA[Health]]></category>
		<category><![CDATA[Medical devices]]></category>
		<category><![CDATA[Smartwatch]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5728</guid>

					<description><![CDATA[For millions of people, smartwatches aren’t just a piece of technology. They can use them to take control of their health in ways never thought possible. As you go on your morning run, a smartwatch can monitor the rhythmic pounding [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>For millions of people, smartwatches aren’t just pieces of technology. They are tools people use to take control of their health in ways never thought possible.</p>
<p>As you go on your morning run, a smartwatch can monitor the rhythmic pounding of your feet and your heart’s steady beat. The watch can record the <a href="https://www.which.co.uk/news/article/can-you-trust-fitness-tracking-stats-ajJf85J6C42l" target="_blank" rel="noopener">distance covered and the intensity of your workout</a>, guiding you towards your fitness goals.</p>
<p>During lunch, you can use it to <a href="https://blog.fitbit.com/fitbit-calories-in-vs-out/" target="_blank" rel="noopener">log calories for a BLT sandwich</a>. As deadlines loom, it can offer gentle reminders to take a moment for yourself. And as you doze off, it <a href="https://www.zdnet.com/article/samsung-galaxy-watch-gets-first-ever-fda-clearance-for-sleep-apnea-detection/" target="_blank" rel="noopener">might pick up instances of apnoea</a> or other sleep disturbances.</p>
<p>But some users could also conflate health tips with medical advice. Device and app developers have <a href="https://www.cnet.com/tech/mobile/features/fitbit-apple-know-smartwatches-arent-medical-devices-but-do-you/" target="_blank" rel="noopener">consistently made it clear</a> that their products cannot replace a professional medical doctor’s advice or treatment.</p>
<p>A smartwatch is not a medical device as defined by law. In the UK, medical devices are strictly regulated in a way that other devices such as smartwatches are not. These regulations give users stronger legal protections and clarity, as well as routes to resolution in the event of a mishap.</p>
<h2>What qualifies</h2>
<p>The key legal framework in the UK is <a href="https://www.legislation.gov.uk/uksi/2002/618/regulation/2/made" target="_blank" rel="noopener">the Medical Devices Regulations 2002 (UK MDR)</a>. Once a product has been identified as a medical device under UK MDR, further classification of it takes place, ranging from low risk (stethoscopes and wheelchairs) to high risk (pacemakers, heart valves, implanted cerebral stimulators).</p>
<p>If a device is designed to go inside the body, or if it contains medicinal substances, it is more likely it is treated as high risk. Depending on the risk classification, the law then imposes stringent standards to protect users from harm. These include obligations on the manufacturers and developers to ensure their devices are safe, through conducting risk impact assessments, periodic audits and other actions.</p>
<p>All matters relating to medical devices in the UK <a href="https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency" target="_blank" rel="noopener">fall under the responsibility</a> of the Medicines and Healthcare Products Regulatory Agency (MHRA). The MHRA conducts surveillance of medical devices available in the UK and has the authority to make decisions regarding their marketing and distribution. It is also the MHRA’s duty to ensure that manufacturers and developers are complying with the regulations.</p>
<h2>Pursuit of wellness?</h2>
<p>An important question is how one distinguishes a device, digital tool or app as one used for a medical purpose – which is how the UK MDR defines a medical device – versus one that is used for general health and wellness. The latter would include, for example, meditation apps or step counters.</p>
<p>Traditionally, <a href="https://www.akingate.com/the-internet-of-things-guide/">smart watches</a> have been <a href="https://www.insiderintelligence.com/insights/wearable-technology-healthcare-medical-devices/" target="_blank" rel="noopener">treated as smart, wearable technology</a>. On the face of it, they offer users insight into their general health and wellness, helping them make necessary lifestyle adjustments to improve their health or fitness goals.</p>
<p>In recent years, however, such technologies have become increasingly advanced. Tens of thousands of digital tools and applications have flooded app stores. These include monitoring apps for mental health, symptom checkers based on information entered by patient users, or medical calculators for drug dosing.</p>
<p>Smartwatches may have <a href="https://support.apple.com/en-us/HT208955" target="_blank" rel="noopener">electrocardiogram (ECG) functions</a>. An ECG is a test used to check the rhythm and electrical activity of a person’s heart. Medical professionals have traditionally used ECGs to look for signs of coronary heart disease or other cardiovascular conditions. The same functions on a watch may not have the sensitivity needed to pick up these medical conditions.</p>
<p>The latest version of the <a href="https://www.apple.com/healthcare/docs/site/Apple_Watch_Arrhythmia_Detection.pdf" target="_blank" rel="noopener">Apple Watch has embedded sensors</a> that may be able to <a href="https://www.nhs.uk/conditions/atrial-fibrillation/" target="_blank" rel="noopener">detect atrial fibrillation</a>, a type of irregular heart rhythm. In the US, <a href="https://www.apple.com/newsroom/2022/06/watchos-9-delivers-new-ways-to-stay-connected-active-and-healthy/" target="_blank" rel="noopener">Apple has obtained clearance</a> from the Food and Drug Administration (FDA) <a href="https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K213971" target="_blank" rel="noopener">allowing it to be used</a> for this purpose, marking a bold move into the regulated medicine and healthcare space.</p>
<p>Biosensors, previously thought of as devices that were <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986445/" target="_blank" rel="noopener">administered only in clinical settings</a>, have now evolved by design into slim patches for consumer use. Take the <a href="https://nixbiosensors.com/" target="_blank" rel="noopener">Nix Biosensor device</a>. When paired with Apple Watches, it is designed to measure a user’s optimal <a href="https://www.theverge.com/23582865/nix-hydration-biosensor-review-wearables-hydration" target="_blank" rel="noopener">hydration level</a> in real time by identifying molecular markers in sweat and determining the loss of fluid and electrolytes (substances that maintain a balance of fluids inside and outside cells).</p>
<p>Finally, emerging trends also indicate that more and more women are relying on fertility and cycle trackers in smartwatches and sophisticated apps. However, there have been concerns that users might use the information <a href="https://www.wired.com/story/apple-watch-fertility-features-not-birth-control/" target="_blank" rel="noopener">in place of actual birth control</a>.</p>
<p>Hence, as smartwatches and trackers evolve, they may approach the threshold for what authorities could consider a medical device.</p>
<h2>Privacy protections</h2>
<p>There’s something else to consider too. Users of devices and digital tools regularly hand over their personal data. Businesses must ensure compliance with the <a href="https://ico.org.uk/for-organisations/data-protection-and-the-eu/data-protection-and-the-eu-in-detail/the-uk-gdpr/" target="_blank" rel="noopener">UK General Data Protection Regulation (UK GDPR)</a> and the <a href="https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted" target="_blank" rel="noopener">Data Protection Act 2018 (DPA)</a>.</p>
<p>Personal health data is a “special category of data”. This would fall under the application of Articles 6 and 9 of the UK GDPR and Schedule 1 of the DPA. This means that more stringent standards are imposed for the collection and use of such data (in its processing), including potentially an obligation to conduct an extensive data impact assessment.</p>
<p>Indeed, the UK’s privacy watchdog, the Information Commissioner’s Office (ICO), <a href="https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2024/02/ico-urges-all-app-developers-to-prioritise-privacy/" target="_blank" rel="noopener">issued a statement</a> on February 8 2024 reminding all app developers to ensure they protect users’ privacy following the regulator’s review of period and fertility apps.</p>
<p>Other potential safeguards for users’ privacy could come from the <a href="https://www.legislation.gov.uk/ukpga/2021/3/contents" target="_blank" rel="noopener">Medicines and Medical Devices Act 2021 (MMDA)</a>, from the appointment of the <a href="https://www.patientsafetycommissioner.org.uk/" target="_blank" rel="noopener">Patient Safety Commissioner</a> and from the National Health Service (NHS), which can now evaluate digital tools using the <a href="https://transform.england.nhs.uk/key-tools-and-info/digital-technology-assessment-criteria-dtac/" target="_blank" rel="noopener">digital technology assessment criteria (DTAC)</a>.</p>
<p>Clear guidelines in this area are not just necessary, they’re imperative. Without them, we potentially risk both stifling innovation and compromising user care.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img data-recalc-dims="1" loading="lazy" decoding="async" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://i0.wp.com/counter.theconversation.com/content/223995/count.gif?resize=1%2C1&#038;ssl=1" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p>Author: <a href="https://theconversation.com/profiles/pin-lean-lau-1282877" target="_blank" rel="noopener">Pin Lean Lau</a>, Senior Lecturer (Associate Professor) in Bio-Law, <em><a href="https://theconversation.com/institutions/brunel-university-london-1685" target="_blank" rel="noopener">Brunel University London</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
<p>&nbsp;</p>
<p><strong>Image Credits:</strong> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/ekg-monitor-intra-aortic-balloon-pump-1936321450" target="_blank" rel="noopener">Pitchyfoto/Shutterstock</a></span></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/your-smart-watch-isnt-a-medical-device-but-it-is-tracking-all-your-health-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5728</post-id>	</item>
		<item>
		<title>Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias</title>
		<link>https://www.akingate.com/artificial-intelligence-needs-to-be-trained-on-culturally-diverse-datasets-to-avoid-bias/</link>
					<comments>https://www.akingate.com/artificial-intelligence-needs-to-be-trained-on-culturally-diverse-datasets-to-avoid-bias/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Sat, 17 Feb 2024 16:45:14 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Bias]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[datasets]]></category>
		<category><![CDATA[Diversity]]></category>
		<category><![CDATA[Large language models]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5714</guid>

					<description><![CDATA[Large language models (LLMs) are deep learning artificial intelligence programs, like OpenAI’s ChatGPT. The capabilities of LLMs have developed into quite a wide range, from writing fluent essays, through coding to creative writing. Millions of people worldwide use LLMs, and [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Large language models (LLMs) are deep learning artificial intelligence programs, like OpenAI’s ChatGPT. LLMs have developed a wide range of capabilities, from <a href="https://www.techradar.com/news/i-had-chatgpt-write-my-college-essay-and-now-im-ready-to-go-back-to-school-and-do-nothing" target="_blank" rel="noopener">writing fluent essays</a> through coding to creative writing.</p>
<blockquote><p><a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/" target="_blank" rel="noopener">Millions of people worldwide use LLMs</a>, and it would not be an exaggeration to say these technologies are transforming work, education and society.</p></blockquote>
<p>LLMs are trained by reading massive amounts of texts and learning to recognize and mimic patterns in the data. This allows them to generate coherent and human-like text on virtually any topic.</p>
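<p>The pattern-learning idea can be illustrated at toy scale. The sketch below is illustrative only: real LLMs use neural networks trained on next-token prediction over trillions of tokens, not simple word counts, but the statistical principle is the same.</p>

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the continuation seen most often in training."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" ("cat" followed "the" twice, "mat" once)
```

<p>Because the model can only reproduce patterns present in its training text, whatever that text contains, including its cultural defaults, is what the model will echo back.</p>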
<p>Because the internet is still predominantly English — <a href="https://www.statista.com/statistics/262946/most-common-languages-on-the-internet/" target="_blank" rel="noopener">59 per cent of all websites were in English as of January 2023</a> — LLMs are primarily trained on English text. In addition, the vast majority of the English text online comes from users based in the United States, home to <a href="https://www.census.gov/library/publications/2022/acs/acs-50.html" target="_blank" rel="noopener">300 million English speakers</a>.</p>
<p>Learning about the world from English texts written by U.S.-based web users, LLMs speak <a href="https://www.pbs.org/speak/seatosea/standardamerican/" target="_blank" rel="noopener">Standard American English</a> and have a narrow western, North American, or even U.S.-centric, lens.</p>
<h2>Model bias</h2>
<p>In 2023, ChatGPT, upon learning about a couple dining in a restaurant in Madrid and tipping four per cent, <a href="https://chat.openai.com/share/2969f35f-8ee2-4bc0-a8a7-c44a7078037e" target="_blank" rel="noopener">suggested they were frugal, on a tight budget or didn’t like the service</a>. By default, ChatGPT followed the North American standard of a 15 to 25 per cent tip, <a href="https://www.tripsavvy.com/should-you-tip-in-spain-1644349" target="_blank" rel="noopener">ignoring the Spanish norm not to tip</a>.</p>
<p>As of early 2024, ChatGPT correctly cites cultural differences when prompted to judge the appropriateness of a tip. It’s unclear if this capability emerged from training a newer version of the model on more data — after all, the web is full of tipping guides in English — or whether OpenAI patched this particular behaviour.</p>
<figure class="align-center zoomable"><a href="https://i0.wp.com/images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ssl=1" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ssl=1" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/574868/original/file-20240212-29-mz6yzd.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="a screen showing text about ChatGPT Optimizing Language Models for Dialogue" /></a><figcaption><span class="caption">Using data from English-language websites, which are predominantly U.S.-based, informs how LLMs respond to prompts.</span><br />
<span class="attribution"><span class="source">(Unsplash/Jonathan Kemper)</span></span></figcaption></figure>
<p>Still, other examples remain that uncover ChatGPT’s implicit cultural assumptions. For example, prompted with a story about guests showing up for dinner at 8:30 p.m., it suggested <a href="https://chat.openai.com/share/3c8db9c7-7c37-4d45-80b2-a891c46fc4fd" target="_blank" rel="noopener">reasons that the guests were late</a>, although the time of the invitation was not mentioned. Again, ChatGPT likely assumed they were invited for a standard North American 6 p.m. dinner.</p>
<p>In May 2023, researchers from the University of Copenhagen <a href="https://doi.org/10.18653/v1/2023.c3nlp-1.7" target="_blank" rel="noopener">quantified this effect</a> by prompting LLMs with the <a href="https://www.hofstede-insights.com/country-comparison-tool" target="_blank" rel="noopener">Hofstede Culture Survey</a>, which measures human values in different countries. Shortly after, researchers from <a href="https://llmglobalvalues.anthropic.com/" target="_blank" rel="noopener">AI start-up company Anthropic</a> used the <a href="https://www.worldvaluessurvey.org/wvs.jsp" target="_blank" rel="noopener">World Values Survey</a> to do the same. Both works concluded that LLMs exhibit strong alignment with American culture.</p>
<p>A similar phenomenon is encountered when asking <a href="https://openai.com/dall-e-3" target="_blank" rel="noopener">DALL-E 3</a>, an image generation model trained on pairs of images and their captions, to generate an image of a breakfast. This model, which was trained on mainly images from Western countries, generated images of pancakes, bacon and eggs.</p>
<h2>Impacts of bias</h2>
<p>Culture plays a significant role in shaping our communication styles and worldviews. Just like <a href="https://erinmeyer.com/books/the-culture-map/" target="_blank" rel="noopener">cross-cultural human interactions can lead to miscommunications</a>, users from diverse cultures who interact with conversational AI tools may feel misunderstood and experience them as less useful.</p>
<p>To be better understood by AI tools, users may adapt their communication styles in a manner similar to how people learned to “Americanize” their foreign accents in order to operate <a href="https://www.washingtonpost.com/graphics/2018/business/alexa-does-not-understand-your-accent/" target="_blank" rel="noopener">personal assistants like Siri and Alexa</a>.</p>
<p>As more people rely on LLMs for editing writing, they are likely to <a href="https://theconversation.com/chatgpt-threatens-language-diversity-more-needs-to-be-done-to-protect-our-differences-in-the-age-of-ai-198878" target="_blank" rel="noopener">unify how we write</a>. Over time, LLMs run the risk of erasing cultural differences.</p>
<h2>Decision-making and AI</h2>
<p>AI is already in use as the backbone of various applications that make decisions affecting people’s lives, such as <a href="https://www.reuters.com/legal/tutoring-firm-settles-us-agencys-first-bias-lawsuit-involving-ai-software-2023-08-10/" target="_blank" rel="noopener">resume filtering</a>, <a href="https://www.open-communities.org/post/press-release-open-communities-reaches-accord-in-case-addressing-artificial-intelligence-communicat" target="_blank" rel="noopener">rental applications</a> and <a href="https://www.theguardian.com/technology/2023/oct/23/uk-officials-use-ai-to-decide-on-issues-from-benefits-to-marriage-licences" target="_blank" rel="noopener">social benefits applications</a>.</p>
<p>For years, <a href="https://www.penguinrandomhouse.com/books/241363/weapons-of-math-destruction-by-cathy-oneil/" target="_blank" rel="noopener">AI researchers have been warning</a> that these models learn not only “good” statistical associations — such as considering experience as a desired property for a job candidate — but also “bad” statistical associations, such as considering <a href="https://www.reuters.com/article/idUSKCN1MK0AG/" target="_blank" rel="noopener">women as less qualified for tech positions</a>.</p>
<p>As LLMs are increasingly used for automating such processes, one can imagine that the North American bias learned by these models can result in discrimination against people from diverse cultures. Lack of cultural awareness may lead to AI perpetuating stereotypes and reinforcing societal inequalities.</p>
<h2>LLMs for languages other than English</h2>
<p>Developing LLMs for languages other than English is an <a href="https://txt.cohere.com/aya-multilingual/" target="_blank" rel="noopener">important effort</a>, and many such models exist. However, there are several reasons why this should be done in parallel to improving LLMs’ cultural awareness and sensitivity.</p>
<p>First, there is a huge population of English speakers outside of North America who are not represented by English LLMs. The same argument holds for other languages: a French language model would be more representative of the culture in France than of other Francophone regions.</p>
<p>Training LLMs for regional dialects — which <a href="https://doi.org/10.1016/j.jue.2012.05.007" target="_blank" rel="noopener">may capture finer-grained cultural differences</a> — is not a feasible solution either. The quality of LLMs is based on the amount of data available, and as such, their quality would be worse for dialects with little online data.</p>
<p>Second, many users whose native language is not English still choose to use English LLMs. Significant breakthroughs in language technologies tend to <a href="https://doi.org/10.18653/v1/2022.emnlp-main.351" target="_blank" rel="noopener">start with English before they are applied to other languages</a>. Even then, many languages — such as Welsh, Swahili and Bengali — don’t have enough text online to train high quality models.</p>
<p>Due to either a lack of availability of LLMs in their native languages, or superior quality of the English LLMs, users from diverse countries and backgrounds may prefer to use English LLMs.</p>
<h2>Ways forward</h2>
<p>Our research group at the University of British Columbia is working on enhancing LLMs with culturally diverse knowledge. Together with graduate student <a href="https://meharbhatia.github.io/" target="_blank" rel="noopener">Mehar Bhatia</a>, we <a href="https://doi.org/10.18653/v1/2023.emnlp-main.496" target="_blank" rel="noopener">trained an AI model</a> on a <a href="https://doi.org/10.1145/3543507.3583535" target="_blank" rel="noopener">collection of facts about traditions and concepts in diverse cultures</a>.</p>
<p>Before reading these facts, the AI suggested that a person eating a Dutch baby (a type of German pancake) is “disgusting and mean,” and would feel guilty. After training, it said the person feels “full and satisfied.”</p>
<figure class="align-center zoomable"><a href="https://i0.wp.com/images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ssl=1" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ssl=1" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/574866/original/file-20240212-21-lmr4xk.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="a pancake covered in berries" /></a><figcaption><span class="caption">Teaching an AI that a Dutch baby was a dish changed its response to learning that someone had consumed one.</span><br />
<span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>We are currently collecting a large-scale image captioning dataset with images from 60 cultures, which will help models learn, for instance, about types of breakfasts other than bacon and eggs. Our future research will go beyond teaching models about the existence of culturally diverse concepts to better understand how people interpret the world through the lens of their cultures.</p>
<p>With AI tools becoming increasingly ubiquitous in society, it is imperative that they go beyond the dominant Western and North American perspectives. Businesses and organizations throughout many sectors of the economy are adopting AI to automate manual processes and make better evidence-informed decisions using data. Making such tools more inclusive is crucial for the diverse population of Canada.</p>
<p>Author: <a href="https://theconversation.com/profiles/vered-shwartz-1509186" target="_blank" rel="noopener">Vered Shwartz</a>, Assistant Professor, Computer science, <em><a href="https://theconversation.com/institutions/university-of-british-columbia-946" target="_blank" rel="noopener">University of British Columbia</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
<p>&nbsp;</p>
<p><strong>Image Credits:</strong> There is a growing need to address diversity in the datasets used to train artificial intelligence. <span class="attribution"><span class="source">(Shutterstock)</span></span></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/artificial-intelligence-needs-to-be-trained-on-culturally-diverse-datasets-to-avoid-bias/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5714</post-id>	</item>
		<item>
		<title>Leading the Way in Sustainability: Practical Steps for Businesses to Lead in Environmental Conservation Technologies</title>
		<link>https://www.akingate.com/leading-the-way-in-sustainability-practical-steps-for-businesses-to-lead-in-environmental-conservation-technologies/</link>
					<comments>https://www.akingate.com/leading-the-way-in-sustainability-practical-steps-for-businesses-to-lead-in-environmental-conservation-technologies/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Tue, 06 Feb 2024 19:43:35 +0000</pubDate>
				<category><![CDATA[Energy and power]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Business Sustainability]]></category>
		<category><![CDATA[Clean Tech]]></category>
		<category><![CDATA[Climate Action]]></category>
		<category><![CDATA[Corporate Responsibility]]></category>
		<category><![CDATA[Eco Friendly]]></category>
		<category><![CDATA[Environmental Innovation]]></category>
		<category><![CDATA[Future Of Business]]></category>
		<category><![CDATA[Green Innovation]]></category>
		<category><![CDATA[Green Tech]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Sustainability]]></category>
		<category><![CDATA[Sustainable Solutions]]></category>
		<category><![CDATA[Tech For Good]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5686</guid>

					<description><![CDATA[In an era of increasing environmental concerns, businesses worldwide recognise the importance of integrating environmental conservation technologies. As our planet faces pressing challenges like climate change and resource depletion, adopting sustainable practices is a moral obligation and a strategic advantage. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>In an era of increasing environmental concerns, businesses worldwide recognise the importance of integrating environmental conservation technologies. As our planet faces pressing challenges like climate change and resource depletion, adopting sustainable practices is a moral obligation and a strategic advantage. This article will delve into environmental conservation technologies and outline practical steps businesses can take to position themselves at the forefront of this transformative movement.</p>
<p>&nbsp;</p>
<h3><strong>Understanding Environmental Conservation Technologies</strong></h3>
<p>Environmental conservation technologies encompass a broad spectrum of innovative solutions to mitigate environmental impact, reduce waste, conserve natural resources, and promote sustainable practices. These technologies can be applied across various sectors, including energy, agriculture, manufacturing, transportation, and construction, to name a few. Here are some key areas where businesses can make a significant impact:</p>
<p>&nbsp;</p>
<h4><strong>Renewable Energy:</strong></h4>
<p>Transitioning to clean and <a href="https://www.akingate.com/innovative-renewable-energy-powering-the-world/" target="_blank" rel="noopener">renewable energy</a> sources like solar, wind, and hydropower can reduce greenhouse gas emissions and reliance on fossil fuels. Installing solar panels, wind turbines, and energy-efficient systems can help businesses reduce their carbon footprint.</p>
<h4><strong>Waste Reduction and Recycling:</strong></h4>
<p>Implementing efficient waste management practices and promoting <a href="https://amzn.to/3UudAys" target="_blank" rel="noopener">recycling</a> within your organisation can significantly reduce landfill waste and save valuable resources. Consider reusing materials, reducing packaging, and running recycling programs.</p>
<h4><strong>Sustainable Supply Chain:</strong></h4>
<p>Collaborating with suppliers who prioritise sustainability and responsible sourcing is crucial. Tracking and reducing the carbon footprint of your supply chain can enhance your overall sustainability efforts.</p>
<h4><strong>Green Building Technologies:</strong></h4>
<p>Incorporating energy-efficient <a href="https://amzn.to/3OWl367" target="_blank" rel="noopener">building designs</a>, materials, and technologies can lower energy consumption and operational costs. This is especially relevant for companies in the construction and real estate sectors.</p>
<h4><strong>Water Conservation:</strong></h4>
<p>Efficient water use and wastewater treatment technologies can help conserve this precious resource. Installing water-saving fixtures and recycling water in manufacturing processes can be beneficial.</p>
<h4><strong>Transportation Solutions:</strong></h4>
<p>Reducing emissions from company vehicles or offering incentives for employees to use public transportation or carpooling are ways to contribute to cleaner air and reduced congestion.</p>
<p>&nbsp;</p>
<h3><strong>Practical Steps for Businesses to Lead in Environmental Conservation Technologies</strong></h3>
<h4><strong>Conduct an Environmental Audit:</strong></h4>
<p>Assess your current environmental impact by conducting a comprehensive environmental audit. This will help identify areas where improvements can be made.</p>
<p>Set clear sustainability goals and targets for your organisation. Make them measurable, time-bound, and aligned with global sustainability initiatives like the United Nations Sustainable Development Goals (SDGs).</p>
<h4><strong>Embrace Renewable Energy:</strong></h4>
<p>Invest in on-site renewable energy sources or purchase Renewable Energy Certificates (RECs) to offset your energy consumption from non-renewable sources.</p>
<p>Implement energy-efficient technologies, such as LED lighting and smart HVAC systems, to reduce energy consumption.</p>
<h4><strong>Sustainable Procurement:</strong></h4>
<p>Collaborate with suppliers who share your commitment to sustainability. Prioritise suppliers with responsible sourcing practices and eco-friendly products.</p>
<p>Consider circular economy principles, such as product design for longevity and ease of recycling.</p>
<h4><strong>Waste Management and Recycling:</strong></h4>
<p>Implement waste reduction strategies within your organisation. Encourage employees to reduce, reuse, and recycle.</p>
<p>Partner with recycling companies to ensure proper disposal of electronic waste (e-waste) and hazardous materials.</p>
<h4><strong>Employee Engagement:</strong></h4>
<p>Foster a culture of sustainability within your workforce. Educate employees about the importance of environmental conservation and encourage their active participation.</p>
<p>Offer incentives for sustainable commuting, such as public transportation subsidies or telecommuting options.</p>
<h4><strong>Technology Adoption:</strong></h4>
<p>Embrace cutting-edge environmental conservation technologies that align with your industry. This may include investing in energy-efficient machinery, adopting IoT-based ecological monitoring systems, or utilising predictive analytics for resource management.</p>
<h4><strong>Publicise Your Commitment:</strong></h4>
<p>Communicate your sustainability efforts transparently to customers, investors, and the public. Highlight your achievements and progress towards sustainability goals.</p>
<p>Showcase eco-friendly certifications, such as LEED (Leadership in Energy and Environmental Design) or B Corp certification, if applicable.</p>
<h4><strong>Collaborate and Innovate:</strong></h4>
<p>Seek partnerships and collaborations with other businesses, research institutions, and government agencies to share knowledge and drive innovation in environmental conservation technologies.</p>
<p>Explore emerging technologies like carbon capture and utilisation, sustainable biofuels, and advanced recycling processes.</p>
<p>&nbsp;</p>
<h4><strong>Conclusion</strong></h4>
<p>Environmental conservation technologies are no longer just a choice but a necessity for businesses looking to thrive in a world increasingly focused on sustainability. By adopting these technologies and implementing practical steps, companies can reduce their environmental impact, cut operational costs, and position themselves as leaders in the transition toward a more sustainable future. Embracing these innovations benefits the environment, enhances a company&#8217;s reputation, attracts environmentally conscious customers, and ensures long-term success in an eco-friendly world.</p>
<p>&nbsp;</p>
<hr />
<p>Image Credit: Image by <a href="https://www.freepik.com/free-photo/view-bioengineering-advance-tech_57314166.htm#query=environment%20renewable&amp;position=41&amp;from_view=search&amp;track=ais&amp;uuid=c9bbfea9-01e8-41b2-b33e-91facf25dd72" target="_blank" rel="noopener">Freepik</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/leading-the-way-in-sustainability-practical-steps-for-businesses-to-lead-in-environmental-conservation-technologies/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5686</post-id>	</item>
		<item>
		<title>Editing memories, spying on our bodies, normalising weird goggles: Apple’s new Vision Pro has big ambitions</title>
		<link>https://www.akingate.com/editing-memories-spying-on-our-bodies-normalising-weird-goggles-apples-new-vision-pro-has-big-ambitions/</link>
					<comments>https://www.akingate.com/editing-memories-spying-on-our-bodies-normalising-weird-goggles-apples-new-vision-pro-has-big-ambitions/#comments</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Wed, 31 Jan 2024 21:29:21 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Gadgets]]></category>
		<category><![CDATA[Apple]]></category>
		<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[iPhones]]></category>
		<category><![CDATA[Surveillance capitalism]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Virtual reality]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5678</guid>

					<description><![CDATA[Apple Vision Pro is a mixed-reality headset – which the company hopes is a “revolutionary spatial computer that transforms how people work, collaborate, connect, relive memories, and enjoy entertainment” – that begins shipping to the public (in the United States) [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Apple Vision Pro is a mixed-reality headset – which the company hopes is a “<a href="https://www.apple.com/newsroom/2024/01/apple-vision-pro-available-in-the-us-on-february-2/" target="_blank" rel="noopener">revolutionary spatial computer</a> that transforms how people work, collaborate, connect, relive memories, and enjoy entertainment” – that begins shipping to the public (in the United States) later this week.</p>
<p>Critics have <a href="https://www.wired.com/story/apple-vision-pro-doomed/" target="_blank" rel="noopener">doubted the appeal</a> of the face-worn computer, which “seamlessly blends digital content with the physical world”, but Apple has pre-sold <a href="https://www.engadget.com/apple-might-have-sold-up-to-180000-vision-pro-headsets-over-pre-order-weekend-081727344.html" target="_blank" rel="noopener">as many as 180,000</a> of the US$3,500 gizmos.</p>
<p>What does Apple think people will do with these pricey peripherals? While uses will evolve, Apple is focusing attention on watching TV and movies, editing and reliving “memories”, and – perhaps most importantly for the product’s success – having its customers not look like total weirdos.</p>
<p>Apple hopes the new device will redefine personal computing, like the iPhone did 16 years ago, and Macintosh did 40 years ago. But if it succeeds, it will also redefine concerns about privacy, as it captures enormous amounts of data about users and their environments, creating an unprecedented kind of “<a href="https://journals.sagepub.com/doi/abs/10.1177/1354856521989514" target="_blank" rel="noopener">biospatial surveillance</a>”.</p>
<h2>Spatial computing</h2>
<p>Apple is careful about its brand and how it packages and describes its products. In an extensive set of <a href="https://developer.apple.com/visionos/submit/#:%7E:text=Don%27t%20refer%20to%20Apple,first%20word%20in%20a%20sentence." target="_blank" rel="noopener">rules for developers</a>, the company insists the new headset is not to be referred to as a “headset”. What’s more, the Apple Vision Pro does not do “augmented reality (AR), virtual reality (VR), extended reality (XR), or mixed reality (MR)” – it is a gateway to “spatial computing”.</p>
<p>Spatial computing, as sketched out in the <a href="https://acg.media.mit.edu/people/simong/thesis/SpatialComputing.pdf" target="_blank" rel="noopener">2003 PhD thesis</a> of US software engineer Simon Greenwold, is: “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces”. In other words, the computer can interact with things in the user’s physical surroundings in real time to provide new types of experiences.</p>
<figure class="align-center zoomable"><a href="https://i0.wp.com/images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ssl=1" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ssl=1" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=328&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=328&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=328&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=412&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=412&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/571805/original/file-20240129-25-4y9k16.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=412&amp;fit=crop&amp;dpr=3 2262w" alt="A CGI dinosaur stands on a rocky field." /></a><figcaption><span class="caption">The Vision Pro comes with an app that lets users get up close and personal with dinosaurs.</span><br />
<span class="attribution"><a class="source" href="https://www.apple.com/tv-pr/news/2024/01/apple-tv-unveils-groundbreaking-immersive-originals-from-todays-biggest-storytellers-set-to-debut-on-apple-vision-pro/" target="_blank" rel="noopener">Apple</a></span></figcaption></figure>
<p>The Vision Pro has big shoes to fill for new user experiences. The iPhone’s initial “killer apps” were <a href="https://www.macworld.com/article/183052/liveupdate-15.html" target="_blank" rel="noopener">clear</a>: the internet in your pocket (including portable access to Google Maps), all your music on a touch screen, and “<a href="https://www.youtube.com/watch?v=c3j03bOOBwY" target="_blank" rel="noopener">visual voicemail</a>”.</p>
<p>Sixteen years later, all three of these seem unremarkable. Apple has sold billions of iPhones, and some <a href="https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/" target="_blank" rel="noopener">80% of humans</a> now use a smartphone. Their success has all but killed off earlier tools like paper maps and music CDs (and the ubiquity of text, image and video messaging has largely done away with voicemail itself).</p>
<h2>Killer apps</h2>
<p>We don’t yet know what the killer apps of spatial computing might be – if any – but here is where Apple is pointing our attention.</p>
<p>The first is entertainment: the Vision Pro promises “<a href="https://www.apple.com/newsroom/2024/01/apple-previews-new-entertainment-experiences-launching-with-apple-vision-pro/" target="_blank" rel="noopener">the ultimate personal theatre</a>”.</p>
<p>The second is an attempt to solve the social problem of walking around with a weird headset covering half your face. An external screen on the goggles shows a constantly updated representation of your eyes to <a href="https://cavrn.org/the-identity-emotion-and-gaze-behind-apples-vision-pro/" target="_blank" rel="noopener">offer important social cues about your gaze</a> to those around you. Admittedly, this looks weird. But Apple hopes it is less weird and more useful than trying to interact with humans wearing blank aluminium ski goggles.</p>
<figure class="align-center zoomable"><a href="https://i0.wp.com/images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ssl=1" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ssl=1" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=325&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=325&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=325&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=408&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=408&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/571806/original/file-20240129-27-kmnd7y.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=408&amp;fit=crop&amp;dpr=3 2262w" alt="A man sitting on a couch wearing a headset while an image of children playing floats in the air in front of him." /></a><figcaption><span class="caption">Reliving ‘memories’ with the Apple Vision Pro.</span><br />
<span class="attribution"><a class="source" href="https://www.apple.com/apple-vision-pro/" target="_blank" rel="noopener">Apple</a></span></figcaption></figure>
<p>The third is the ability to capture and relive “memories”: recording and playback of 3D visual and audio from real events. Reviewers have found it striking:</p>
<blockquote><p>this was <a href="https://www.cnet.com/tech/computing/i-saw-my-iphone-spatial-movies-in-apple-vision-pro/" target="_blank" rel="noopener">stuff from my own life</a>, my own memories. I was playing back experiences I had already lived.</p></blockquote>
<p>Apple has <a href="https://www.patentlyapple.com/2023/10/a-new-vision-pro-patent-describes-its-3d-camera-allowing-users-to-relive-memories-add-notes-commentary-about-that-moment.html" target="_blank" rel="noopener">patented</a> tools to select, store, and annotate digital “memories”. These memories are files, and potentially products, to be shared in “spatial videos” <a href="https://www.apple.com/au/newsroom/2023/12/apple-introduces-spatial-video-capture-on-iphone-15-pro/" target="_blank" rel="noopener">recorded on the latest iPhones</a>.</p>
<h2>Biospatial surveillance</h2>
<p>There is already a large infrastructure devoted to helping tech companies track our behaviour in order to sell us things. Recent <a href="https://www.consumerreports.org/electronics/privacy/each-facebook-user-is-monitored-by-thousands-of-companies-a5824207467/" target="_blank" rel="noopener">research</a> found Facebook, for example, receives data from an average of around 2,300 companies on each individual user.</p>
<p>Spatial computing offers a step change to this tracking. In order to function, spatial computing records and uses vast amounts of intimate data about our bodies and surroundings.</p>
<p>One <a href="https://www.slideshare.net/kentbye/towards-a-framework-for-xr-ethics-kent-bye-awe-november-11-2021" target="_blank" rel="noopener">study on headset design</a> noted no fewer than 64 different streams of biometric and physiological data, from eye tracking and pupil response to subtle changes in the body’s electromagnetic field.</p>
<h2>Your face tomorrow</h2>
<p>This is not “consumer” data like the brand of toothpaste you buy. It is more akin to medical data.</p>
<p>For instance, <a href="http://www.mkhamis.com/data/papers/abraham2022nordichi.pdf" target="_blank" rel="noopener">analysing a person’s unconscious movements</a> can reveal their emotional state or even predict neurodegenerative disease. This is called “<a href="https://xrsi.org/definition/biometrically-inferred-data-bid" target="_blank" rel="noopener">biometrically inferred data</a>” as users are unaware their bodies are giving it up.</p>
<p>Apple suggests it won’t share this type of data with anyone, and Apple has proven better than most companies on privacy. But biospatial surveillance draws ever more of ourselves into spatial computing, in ways that are still expanding.</p>
<p>It starts simply enough in the pre-order process, where you need to scan your facial features with your iPhone (to ensure a snug fit). But that’s not the end of it.</p>
<p>Apple’s <a href="https://patents.google.com/patent/WO2023196257A1/en?oq=WO2023196257" target="_blank" rel="noopener">patent about memories</a> is also about how to “guide and direct a user with attention, memory, and cognition” through feedback loops that monitor “facial recognition, eye tracking, user mood detection, user emotion detection, voice detection, etc. [from a] bio-sensor for tracking biometric characteristics, such as health and activity metrics […] and other health-related information”.</p>
<h2>Social questions</h2>
<p>Biospatial surveillance is also the key to Apple’s attempt to solve the social problems created by wearing a headset in public. The external screen showing a simulated approximation of the user’s gaze relies on constant measurement of the user’s expression and eye movement with multiple sensors.</p>
<figure class="align-center zoomable"><a href="https://i0.wp.com/images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ssl=1" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ssl=1" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=312&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=312&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=312&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=393&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=393&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/571865/original/file-20240129-21-5qnrow.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=393&amp;fit=crop&amp;dpr=3 2262w" alt="A man wearing goggles with a screen that shows his eyes" /></a><figcaption><span class="caption">An external screen shows a representation of the user’s eyes.</span><br />
<span class="attribution"><a class="source" href="https://youtu.be/IY4x85zqoJM?feature=shared&amp;t=57" target="_blank" rel="noopener">Apple</a></span></figcaption></figure>
<p>Your face is constantly mapped so others can see it – or rather see Apple’s vision of it. Likewise, as passersby come into range of the Apple Vision Pro’s sensors, Apple’s vision of them is automagically rendered into your experience, whether they like it or not.</p>
<p>Apple’s new vision of us – and those who surround us – shows how the requirements and benefits of spatial computing will pose new privacy concerns and social questions. The extensive biospatial surveillance that captures intimate biometric and environmental data redefines which personal data and social interactions are open to exploitation.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img data-recalc-dims="1" loading="lazy" decoding="async" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://i0.wp.com/counter.theconversation.com/content/221910/count.gif?resize=1%2C1&#038;ssl=1" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p><a href="https://theconversation.com/profiles/luke-heemsbergen-1554" target="_blank" rel="noopener">Luke Heemsbergen</a>, Senior Lecturer, Digital, Political, Media, <em><a href="https://theconversation.com/institutions/deakin-university-757" target="_blank" rel="noopener">Deakin University</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/editing-memories-spying-on-our-bodies-normalising-weird-goggles-apples-new-vision-pro-has-big-ambitions/feed/</wfw:commentRss>
			<slash:comments>9</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5678</post-id>	</item>
		<item>
		<title>Transhumanism: billionaires want to use tech to enhance our abilities – the outcomes could change what it means to be human</title>
		<link>https://www.akingate.com/transhumanism-billionaires-want-to-use-tech-to-enhance-our-abilities-the-outcomes-could-change-what-it-means-to-be-human/</link>
					<comments>https://www.akingate.com/transhumanism-billionaires-want-to-use-tech-to-enhance-our-abilities-the-outcomes-could-change-what-it-means-to-be-human/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Thu, 18 Jan 2024 20:36:12 +0000</pubDate>
				<category><![CDATA[Engineering]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Educate me]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Transhumanism]]></category>
		<category><![CDATA[Ultimate Guide to Plastic Surgery]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5637</guid>

					<description><![CDATA[Many prominent people in the tech industry have talked about the increasing convergence between humans and machines in coming decades. For example, Elon Musk has reportedly said he wants humans to merge with AI “to achieve a symbiosis with artificial [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Many prominent people in the tech industry have talked about the increasing convergence between humans and machines in coming decades. For example, Elon Musk has reportedly said he wants humans <a href="https://www.vox.com/future-perfect/2019/7/17/20697812/elon-musk-neuralink-ai-brain-implant-thread-robot" target="_blank" rel="noopener">to merge with AI</a> “to achieve a symbiosis with artificial intelligence”.</p>
<p><a href="https://neuralink.com/" target="_blank" rel="noopener">His company Neuralink</a> aims to facilitate this convergence so that humans won’t be “left behind” as technology advances in the future. While people with disabilities would be near-term recipients of these innovations, some believe technologies like this could be used to enhance abilities in everyone.</p>
<p>These aims are inspired by an idea called transhumanism, the belief that we should use science and technology to radically enhance human capabilities and seek to direct our own evolutionary path. Disease, aging and death are all realities transhumanists wish to end, alongside dramatically increasing our cognitive, emotional and physical capacities.</p>
<p><a href="https://azofthefuture.podbean.com/e/episode-4-transhumanism-part-1/" target="_blank" rel="noopener">Transhumanists</a> often advocate for the three “supers” of superintelligence, superlongevity and superhappiness, the last referring to ways of achieving lasting happiness. There are many different views among the transhumanist community of what our ongoing evolution should look like.</p>
<p>For example, some advocate <a href="https://theconversation.com/how-uploading-our-minds-to-a-computer-might-become-possible-206804" target="_blank" rel="noopener">uploading the mind into digital form</a> and <a href="https://nickbostrom.com/astronomical/waste" target="_blank" rel="noopener">settling the cosmos</a>. Others think we should remain organic beings but rewire or upgrade our biology through genetic engineering and other methods. A future of designer babies, artificial wombs and anti-aging therapies appeals to these thinkers.</p>
<p>This may all sound futuristic and fantastical, but rapid developments in artificial intelligence (AI) and synthetic biology have led some to argue we are on the cusp of creating such possibilities.</p>
<h2>God-like role</h2>
<p><a href="https://www.standard.co.uk/news/world/silicon-valley-billionaire-pays-company-thousands-to-kill-him-and-preserve-his-brain-forever-a3790871.html" target="_blank" rel="noopener">Tech billionaires</a> are among the <a href="https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/" target="_blank" rel="noopener">biggest promoters of transhumanist thinking</a>. It is not hard to understand why: they could be the central protagonists in the most important moment in history.</p>
<p>Creating so-called <a href="https://theconversation.com/uk/topics/artificial-general-intelligence-3286" target="_blank" rel="noopener">artificial general intelligence</a> (AGI) – that is, an AI system that can do all the cognitive tasks a human can do and more – is a current focus within Silicon Valley. AGI is seen as vital to enabling us to take on the God-like role of designing our own evolutionary futures.</p>
<figure class="align-center "><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ssl=1" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/569361/original/file-20240115-21-jr87bd.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w" alt="Anti-aging therapy." /><figcaption><span class="caption">Advanced anti-aging therapies are one area that could deepen inequality.</span><br />
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hyaluronic-acid-injection-facial-rejuvenation-procedure-562280392" target="_blank" rel="noopener">Africa Studio</a></span></figcaption></figure>
<p>That is why companies like OpenAI, DeepMind and Anthropic are racing towards the development of AGI, despite some experts warning that it could <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/" target="_blank" rel="noopener">lead to human extinction</a>.</p>
<p>In the short term, the promises and the perils are probably overstated. After all, these companies have a lot to gain by making us think they are on the verge of engineering a divine power that can create utopia or destroy the world. Meanwhile, AI has played a role in fuelling our polarised political landscape, with disinformation and more complex forms of manipulation made more effective by generative AI.</p>
<p>Indeed, AI systems are already causing <a href="https://www.vice.com/en/article/akex34/chatgpt-is-a-bullshit-generator-waging-class-war" target="_blank" rel="noopener">many other forms of social and environmental harm</a>. AI companies rarely wish to address these harms, though. If they can make governments focus on long-term potential “safety” issues relating to possible existential risks instead of actual social and environmental injustices, they stand to benefit from the resulting regulatory framework.</p>
<p>But if we lack the capacity and determination to address these real-world harms, it’s hard to believe that we will be able to mitigate <a href="https://time.com/6327635/ai-needs-to-be-regulated-like-nuclear-weapons/" target="_blank" rel="noopener">larger-scale risks that AI may hypothetically enable</a>. If there really is a threat that AGI could pose an existential risk, for example, everyone would shoulder that cost, but the profits would be very much private.</p>
<h2>A familiar story</h2>
<p>This issue within AI development can be seen as a microcosm of why the wider transhumanist imagination may appeal to billionaire elites <a href="https://theconversation.com/polycrisis-may-be-a-buzzword-but-it-could-help-us-tackle-the-worlds-woes-195280" target="_blank" rel="noopener">in an age of multiple crises</a>. It speaks to a refusal to engage with grounded ethics, injustices and challenges, and offers a grandiose narrative of a resplendent future to distract from the current moment.</p>
<p>Our misuse of the planet’s resources has set in train a sixth mass extinction of species and a climate crisis. In addition, ongoing wars with increasingly potent weapons remain a part of our technological evolution.</p>
<p>There’s also the pressing question of <a href="https://theconversation.com/super-intelligence-and-eternal-life-transhumanisms-faithful-follow-it-blindly-into-a-future-for-the-elite-78538" target="_blank" rel="noopener">whose future will be transhuman</a>. We currently live in a very unequal world. Transhumanism, if developed in anything like our existing context, is likely to greatly increase inequality, and may have catastrophic consequences for the majority of humans.</p>
<p>Perhaps transhumanism itself is a symptom of the kind of thinking that has created our parlous social reality. It is a narrative that encourages us to hit the gas, expropriate nature even more, keep growing and not look back at the devastation in the rear-view mirror.</p>
<p>If we’re really on the verge of creating an enhanced version of humanity, we should start to ask some big questions about what being human should mean, and therefore what an enhancement of humanity should entail.</p>
<p>If the human is an aspiring God, then it lays claim to dominion over nature and the body, making all amenable to its desires. But if the human is an animal embedded in complex relations with other species and nature at large, then “enhancement” is contingent on the health and sustainability of its relations.</p>
<p>If the human is conceived of as an environmental threat, then enhancement is surely that which redirects its exploitative lifeways. Perhaps becoming more-than-human should constitute a much more responsible humanity.</p>
<p>One that shows compassion to and awareness of other forms of life in this rich and wondrous planet. That would be preferable to colonising and extending ourselves, with great hubris, at the expense of everything, and everyone, else.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img data-recalc-dims="1" loading="lazy" decoding="async" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://i0.wp.com/counter.theconversation.com/content/220549/count.gif?resize=1%2C1&#038;ssl=1" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p>Authors: <a href="https://theconversation.com/profiles/alexander-thomas-1500344" target="_blank" rel="noopener">Alexander Thomas</a>, Programme Leader, Media, Fashion &amp; Communications, <em><a href="https://theconversation.com/institutions/university-of-east-london-924" target="_blank" rel="noopener">University of East London</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
<p>Image Credit: <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/cyborg-woman-machine-part-her-face-1489006997" target="_blank" rel="noopener">Kotin / Shutterstock</a></span></p>
<p>#ad #commissionsearned</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/transhumanism-billionaires-want-to-use-tech-to-enhance-our-abilities-the-outcomes-could-change-what-it-means-to-be-human/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5637</post-id>	</item>
		<item>
		<title>AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024</title>
		<link>https://www.akingate.com/ai-is-here-and-everywhere-3-ai-researchers-look-to-the-challenges-ahead-in-2024/</link>
					<comments>https://www.akingate.com/ai-is-here-and-everywhere-3-ai-researchers-look-to-the-challenges-ahead-in-2024/#respond</comments>
		
		<dc:creator><![CDATA[Akingate]]></dc:creator>
		<pubDate>Sat, 06 Jan 2024 20:38:11 +0000</pubDate>
				<category><![CDATA[Computing and ICT]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[G-Tech]]></category>
		<category><![CDATA[AI chatbots]]></category>
		<category><![CDATA[AI education]]></category>
		<category><![CDATA[AI regulation]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Deep learning]]></category>
		<category><![CDATA[Deepfakes]]></category>
		<category><![CDATA[Expert panel]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.akingate.com/?p=5581</guid>

					<description><![CDATA[2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also [&#8230;]]]></description>
										<content:encoded><![CDATA[<blockquote><p><em>2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the <a href="https://theconversation.com/generative-ai-5-essential-reads-about-the-new-era-of-creativity-job-anxiety-misinformation-bias-and-plagiarism-203746" target="_blank" rel="noopener">emergence of generative AI</a>, which moved the technology from the shadows to center stage in the public imagination. It also saw <a href="https://theconversation.com/openai-is-a-nonprofit-corporate-hybrid-a-management-expert-explains-how-this-model-works-and-how-it-fueled-the-tumult-around-ceo-sam-altmans-short-lived-ouster-218340" target="_blank" rel="noopener">boardroom drama</a> in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue <a href="https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694" target="_blank" rel="noopener">an executive order</a> and the European Union <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" rel="noopener">pass a law</a> aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.</em></p></blockquote>
<p><em>We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.</em></p>
<hr />
<p><strong>Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder</strong></p>
<p>2023 was the <a href="http://bit.ly/ai-ethics-news" target="_blank" rel="noopener">year of AI hype</a>. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of <a href="https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375" target="_blank" rel="noopener">overcoming ethical debt in tech</a>, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.</p>
<p>One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, <a href="https://cfiesler.medium.com/chatgpt-wrapped-an-ais-year-in-review-dc37252c494f" target="_blank" rel="noopener">most relevant headlines focused on</a> how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that <a href="https://www.thedailybeast.com/ai-written-homework-is-rising-so-are-false-accusations" target="_blank" rel="noopener">often do more harm than good</a>.</p>
<p>However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools <a href="https://www.nytimes.com/2023/08/24/business/schools-chatgpt-chatbot-bans.html" target="_blank" rel="noopener">rescinded their bans</a>. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.</p>
<p>So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, <a href="https://dl.acm.org/doi/10.1145/365153.365168" target="_blank" rel="noopener">wrote that</a> machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.</p>
<p>I think it’s possible to make this happen. I hope that universities that are <a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/05/19/colleges-race-hire-and-build-amid-ai-gold" target="_blank" rel="noopener">rushing to hire more technical AI experts</a> put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.</p>
<figure><iframe loading="lazy" src="https://www.youtube.com/embed/eXdVDhOGqoE?wmode=transparent&amp;start=0" width="440" height="260" frameborder="0" allowfullscreen="allowfullscreen"></iframe><figcaption><span class="caption">Many of the challenges in the year ahead have to do with problems of AI that society is already facing.</span></figcaption></figure>
<hr />
<p><strong>Kentaro Toyama, Professor of Community Information, University of Michigan</strong></p>
<p>In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, <a href="https://books.google.co.jp/books?id=2FMEAAAAMBAJ&amp;pg=PA58" target="_blank" rel="noopener">told Life magazine</a>, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With <a href="https://futurism.com/singularity-explain-it-to-me-like-im-5-years-old" target="_blank" rel="noopener">the singularity</a> – the moment artificial intelligence matches and begins to exceed human intelligence – not quite here yet, it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.</p>
<p>Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public <a href="https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704" target="_blank" rel="noopener">release of ChatGPT in 2022</a> kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.</p>
<p>The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of <a href="https://www.mathworks.com/discovery/deep-learning.html" target="_blank" rel="noopener">deep learning</a> – what might be called <a href="https://medium.com/@kentarotoyama/characterizing-generative-ai-circa-2023-d73a4d334bef" target="_blank" rel="noopener">generalized hard reasoning</a>, things like <a href="https://www.livescience.com/21569-deduction-vs-induction.html" target="_blank" rel="noopener">deductive logic</a>. Will quick tweaks to existing <a href="https://theconversation.com/what-is-a-neural-network-a-computer-scientist-explains-151897" target="_blank" rel="noopener">neural-net</a> algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist <a href="https://dblp.org/pid/164/5919.html" target="_blank" rel="noopener">Gary Marcus</a> <a href="https://arxiv.org/ftp/arxiv/papers/2002/2002.06177.pdf" target="_blank" rel="noopener">suggests</a>? Armies of AI scientists are working on this problem, so I expect some headway in 2024.</p>
<p>Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes – AI-generated images and videos that are difficult to detect – are likely to run rampant despite <a href="https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html" target="_blank" rel="noopener">nascent regulation</a>, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.</p>
<p>Speaking of problems, the very people sounding the loudest alarms about AI – like <a href="https://edition.cnn.com/2023/04/17/tech/elon-musk-ai-warning-tucker-carlson/index.html" target="_blank" rel="noopener">Elon Musk</a> and <a href="https://edition.cnn.com/2023/10/31/tech/sam-altman-ai-risk-taker/index.html" target="_blank" rel="noopener">Sam Altman</a> – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.</p>
<hr />
<p><strong>Anjana Susarla, Professor of Information Systems, Michigan State University</strong></p>
<p>In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to <a href="https://en.wikipedia.org/wiki/ChatGPT" target="_blank" rel="noopener">ChatGPT a year back</a>, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, <a href="https://www.nytimes.com/2023/12/01/podcasts/transcript-ezra-klein-interviews-casey-newton-kevin-roose.html" target="_blank" rel="noopener">but also from videos on YouTube, songs on Spotify</a>, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.</p>
<p>Companies are racing to <a href="https://llm.mlc.ai" target="_blank" rel="noopener">develop LLMs that can be deployed</a> on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these <a href="https://arxiv.org/pdf/2306.02707.pdf" target="_blank" rel="noopener">lightweight LLMs</a> and <a href="https://hai.stanford.edu/sites/default/files/2023-12/Governing-Open-Foundation-Models.pdf" target="_blank" rel="noopener">open source LLMs</a> could usher in a <a href="https://www.oneusefulthing.org/p/an-ai-haunted-world" target="_blank" rel="noopener">world of autonomous AI agents</a> – a world that society is not necessarily prepared for.</p>
<p>These advanced AI capabilities offer immense transformative power in applications ranging from <a href="https://cloud.google.com/blog/products/ai-machine-learning/multimodal-generative-ai-search" target="_blank" rel="noopener">business</a> to <a href="https://doi.org/10.1186/s12909-023-04698-z" target="_blank" rel="noopener">precision medicine</a>. My chief concern is that such advanced capabilities will pose new challenges for <a href="https://doi.org/10.1073/pnas.2208839120" target="_blank" rel="noopener">distinguishing between human-generated content and AI-generated content</a>, as well as pose new types of <a href="https://doi.org/10.1038/d41586-023-00340-6" target="_blank" rel="noopener">algorithmic harms</a>.</p>
<p>The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can <a href="https://doi.org/10.48550/arXiv.2310.00737" target="_blank" rel="noopener">manufacture synthetic identities</a> and orchestrate <a href="https://doi.org/10.48550/arXiv.2305.06972" target="_blank" rel="noopener">large-scale misinformation</a>. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as <a href="https://doi.org/10.1145/3498366.3505816" target="_blank" rel="noopener">information verification, information literacy and serendipity</a> provided by search engines, social media platforms and digital services.</p>
<p>The Federal Trade Commission has warned <a href="https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services" target="_blank" rel="noopener">about fraud, deception, infringements on privacy</a> and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube <a href="https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/" target="_blank" rel="noopener">have instituted policy guidelines</a> for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy &amp; Protection Act.</p>
<p>A new <a href="https://bluntrochester.house.gov/news/documentsingle.aspx?DocumentID=4062" target="_blank" rel="noopener">bipartisan bill</a> introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img data-recalc-dims="1" loading="lazy" decoding="async" style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://i0.wp.com/counter.theconversation.com/content/218218/count.gif?resize=1%2C1&#038;ssl=1" alt="The Conversation" width="1" height="1" /><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p>Authors: <a href="https://theconversation.com/profiles/anjana-susarla-334987" target="_blank" rel="noopener">Anjana Susarla</a>, Professor of Information Systems, <em><a href="https://theconversation.com/institutions/michigan-state-university-1349" target="_blank" rel="noopener">Michigan State University</a></em>; <a href="https://theconversation.com/profiles/casey-fiesler-1390346" target="_blank" rel="noopener">Casey Fiesler</a>, Associate Professor of Information Science, <em><a href="https://theconversation.com/institutions/university-of-colorado-boulder-733" target="_blank" rel="noopener">University of Colorado Boulder</a></em>, and <a href="https://theconversation.com/profiles/kentaro-toyama-160672" target="_blank" rel="noopener">Kentaro Toyama</a>, Professor of Community Information, <em><a href="https://theconversation.com/institutions/university-of-michigan-1290" target="_blank" rel="noopener">University of Michigan</a></em></p>
<p>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license.</p>
<p>Image Credit: <a href="https://www.freepik.com/free-psd/robot-working-modern-office-with-real-people-generative-ai_47892759.htm#query=robots&amp;position=21&amp;from_view=search&amp;track=sph&amp;uuid=82d6650b-efed-417c-bfe5-81e73be3c2fa" target="_blank" rel="noopener">Image by WangXiNa</a> on Freepik</p>
<p>#ad #commissionsearned</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.akingate.com/ai-is-here-and-everywhere-3-ai-researchers-look-to-the-challenges-ahead-in-2024/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5581</post-id>	</item>
	</channel>
</rss>
