Thursday, September 26, 2024

Artificial Intelligence

 

"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it" – This is the maxim on which AI is build. Machines are made to behave intelligently. It can read and grasp reams of data; it can determine patterns and spot outliers. Unlike automation, it learns from mistakes, and like human beings, with more practice, it becomes better. AI is meant to free up time for people but can never dispense with the need for human experience and insight. AI is helping industries like financial services, healthcare,  automotive and many others, accelerate innovation, improve customer experience, and reduce costs.

Bots are set to replace tax preparers, online shopping is making the sales rep extinct, self-checkout reduces the need for cashiers, robots are replacing medical technicians, lawyers are being replaced with bots, and BPO can become machine driven.

The AI revolution is in full swing, with many monumental achievements. In healthcare, a doctor in China performed remote surgery from his home town, and timely diagnoses and treatment are improving patient care. ChatGPT, a generative artificial intelligence chatbot developed by OpenAI and launched in 2022 on the GPT-3.5 large language model, provides a direct answer to any question you ask, unlike Google, which presents multiple options for you to choose from.

 The first self-driving car - In 1995, Mercedes-Benz managed to drive a modified S-Class mostly autonomously from Munich to Copenhagen.

According to autoevolution, the 1,043-mile ride was made by effectively stuffing a supercomputer into the boot: the car contained 60 transputer chips, which at the time were state of the art in parallel computing, meaning it could process a great deal of driving data quickly - a crucial part of making a self-driving car sufficiently responsive. The vehicle reached speeds of up to 115 mph and was remarkably similar to today's autonomous cars, as it could overtake and read road signs.

But when and how did it start? Any guess?

The concept of AI didn't suddenly appear - it is the subject of a deep philosophical debate that still rages today: Can a machine really think like a human? Can a machine be human? One of the first people to consider this was René Descartes, way back in 1637, in a book called Discourse on the Method.

The second major philosophical benchmark came courtesy of computer science pioneer Alan Turing. In 1950 he first described what became known as the Turing Test, which he referred to as "The Imitation Game" - a test for measuring the point at which we can finally declare that machines can be intelligent.

His test was simple: have a judge interact with both a human and a machine (say, through text only) - can the machine trick the judge into thinking it is the human one? If the judge cannot tell the difference, the machine passes.

"Neural network" is the fancy name scientists give to trial and error, the key idea underpinning present-day AI. Essentially, when it comes to teaching an AI, the best way to do it is to have the machine guess, receive feedback, and guess again - continually shifting the probabilities until it arrives at the correct answer. What is quite remarkable is that the first neural network was actually created way back in 1951. Called SNARC - the Stochastic Neural Analog Reinforcement Calculator - it was built by Marvin Minsky and Dean Edmonds, not out of microchips and transistors, but out of vacuum tubes, motors, and clutches.
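To make that guess-and-feedback loop concrete, here is a minimal sketch in Python - a toy single artificial neuron, nothing like SNARC itself - that learns the logical AND function purely by guessing, checking its error, and adjusting its weights:

import random

# Toy "guess, get feedback, guess again" loop: a single artificial neuron
# learns the logical AND function. Purely illustrative - nothing like SNARC.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = random.uniform(-0.1, 0.1)   # start with small random guesses for the weights
w2 = random.uniform(-0.1, 0.1)
bias = random.uniform(-0.1, 0.1)
rate = 0.1                       # how strongly each piece of feedback shifts the weights

for _ in range(200):             # repeat the trial-and-error loop
    for (x1, x2), target in data:
        guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0   # the guess
        error = target - guess                             # the feedback
        w1 += rate * error * x1                            # adjust, then guess again
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in data:
    prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
    print((x1, x2), "->", prediction, "(expected:", str(target) + ")")

After enough rounds of feedback, the neuron's weights settle into values that reproduce the AND table - the same basic loop, vastly scaled up, is what trains today's deep networks.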

In 1997, IBM was responsible for perhaps the most famous chess match of all time, as its Deep Blue computer bested world chess champion, Garry Kasparov - demonstrating how powerful machines can be.

To a certain extent, Deep Blue's genius was illusory - IBM itself reckons that its machine was not using artificial intelligence. Instead, Deep Blue relied on brute-force processing, evaluating thousands of possible moves every second. IBM fed the system with data on thousands of earlier games, and each time the board changed with each move, Deep Blue wasn't learning anything new; it was instead looking up how previous grandmasters had reacted in identical situations. "He's playing the ghosts of grandmasters past," as IBM puts it. Whether this counts as AI or not, what's clear is that it was indeed a significant milestone, and one that drew much attention not just to the computational capabilities of computers but also to the discipline as a whole. Since the face-off with Kasparov, besting human players at games has become a significant, populist way of benchmarking machine intelligence - as we saw again in 2011, when IBM's Watson handily trounced two of the game show Jeopardy!'s greatest players.
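The flavor of that brute-force, look-ahead style of play can be shown with a short sketch. The minimax search below (on tic-tac-toe, and of course nothing like Deep Blue's actual chess engine) simply examines every possible continuation and picks the move whose worst-case outcome is best:

# Minimal brute-force game-tree search (plain minimax) on tic-tac-toe. Deep
# Blue's real chess engine was far more elaborate, but the core idea is the
# same: examine every possible continuation and pick the safest move.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Return (score, move) for the side to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                              # draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)         # opponent's best reply
        board[m] = " "
        if -score > best_score:                     # their gain is our loss
            best_score, best_move = -score, m
    return best_score, best_move

# A sample position with X to move; the search finds the winning square (8).
board = list("XOO X    ")
print(minimax(board, "X"))

On a small game this exhaustive search is trivial; chess is enormously larger, which is why Deep Blue needed specialised hardware and a library of grandmaster games to prune and guide its search.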

Machines Start Talking - Siri

Natural language processing has long been a holy grail of artificial intelligence - and essential if we're ever going to have a world where humanoid robots exist, or where we can bark orders at our devices like in Star Trek.

The 2010s: Watson and the Present Day

In 2011, the story of the voice revolution reached a decisive turning point: the question-answering system Watson competed with the best champions of the popular television quiz show Jeopardy! and defeated them on total points, becoming the first system capable of processing natural language with the same speed and confidence as a human.

This victory set the stage for a forthcoming set of digital smart products that you can control with your voice. Months after Watson's success, Apple introduced Siri to the world, and then conversational assistants began to pop up like mushrooms after the rain (2012: Google Now, 2014: Cortana and Amazon Alexa, 2016: Google Assistant and Google Home, 2017: Bixby, etc.).

And this is why Siri, which was built using statistical language-processing methods, was so impressive. Created by SRI International and initially launched as a separate app on the iOS App Store, it was quickly acquired by Apple and deeply integrated into iOS. Today it is one of the highest-profile fruits of machine learning, and along with equivalent products from Google (the Assistant), Microsoft (Cortana), and of course Amazon (Alexa), it has changed the way we interact with our devices in a way that would have seemed impossible just a few years earlier.

Today we take it for granted - but you only have to ask anyone who ever tried to use voice-to-text software before 2010 to appreciate just how far we've come.

Like voice recognition, image recognition is another crucial task that AI is helping to conquer. In 2015, researchers concluded for the first time that machines - in this case, two competing systems from Google and Microsoft - were better at identifying objects in pictures than humans were, across over one thousand categories. These "deep learning" systems succeeded at the ImageNet Challenge - think something like the Turing Test, but for image recognition - and they will be essential if image recognition is ever going to scale beyond human abilities.
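For a sense of what such a system looks like in practice, here is a minimal sketch that labels a single photo with a network pre-trained on ImageNet. It assumes PyTorch and torchvision are installed, "cat.jpg" is just a placeholder filename for any local image, and this is not the 2015 Google or Microsoft system itself:

import torch
from PIL import Image
from torchvision import models

# Minimal sketch: label one photo with a network pre-trained on ImageNet.
# Assumes torchvision is installed; "cat.jpg" is a placeholder for any local
# image, and this is not the 2015 Google/Microsoft system itself.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                         # resize, crop, normalize

image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    probabilities = model(image).softmax(dim=1)[0]

best = probabilities.argmax().item()
print(weights.meta["categories"][best], float(probabilities[best]))

A few lines of code now do what was a research frontier a decade ago, largely because the heavy lifting has already been done during training on millions of labelled images.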

 GPUs make AI economical.

One of the big reasons AI is now such a big deal is that it is only over the last few years that crunching so much data has become affordable. According to Fortune, it was only in the late 2000s that researchers realized that graphical processing units (GPUs), which had been developed for 3D graphics and games, were 20 to 50 times better at deep-learning computation than traditional CPUs.
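The difference is easy to feel with a rough timing sketch. The snippet below compares the same large matrix multiplication - the core operation behind neural networks - on the CPU and on a CUDA GPU, assuming PyTorch is installed; the actual speedup varies widely with hardware:

import time
import torch

# Rough illustration of why GPUs changed the economics of deep learning:
# time the same large matrix multiplication (the core operation behind
# neural networks) on the CPU and, if one is available, on a CUDA GPU.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b
cpu_seconds = time.time() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    _ = a_gpu @ b_gpu                # warm-up so context setup isn't timed
    torch.cuda.synchronize()
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()         # wait for the GPU to actually finish
    gpu_seconds = time.time() - start
    print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s  "
          f"(~{cpu_seconds / gpu_seconds:.0f}x faster)")
else:
    print(f"CPU only: {cpu_seconds:.3f}s (no CUDA GPU found)")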

AlphaGo and AlphaGo Zero conquer all.

In March 2016, another AI milestone was reached as Google's AlphaGo software beat Lee Sedol, a top-ranked player of the board game Go, echoing Garry Kasparov's historic match. What made it significant was not simply that Go is an even more mathematically complex game than chess, but that AlphaGo was trained using a combination of human and AI opponents. Google won four out of five of the matches, reportedly using 1,920 CPUs and 280 GPUs.

Perhaps even more significant is what came in 2017, when a later version of the software, AlphaGo Zero, arrived. Instead of using any previous data to learn the game, as AlphaGo and Deep Blue had, it simply played game after game against itself - and after three days of training, it was capable of beating the version of AlphaGo that beat Lee Sedol one hundred games to nil.
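The core idea of learning purely from self-play can be illustrated on a far smaller game. The sketch below is a toy learner for Nim (take 1-3 stones, whoever takes the last stone wins) that starts with zero knowledge and improves only by playing against itself; it has nothing like AlphaGo Zero's scale or its neural-network-guided tree search, but the principle is the same:

import random
from collections import defaultdict

# Minimal self-play sketch on the game of Nim (take 1-3 stones, whoever takes
# the last stone wins). One shared value table learns purely by playing
# against itself - a toy analogue, not AlphaGo Zero.
ACTIONS = (1, 2, 3)
START_PILE = 11
EPSILON, ALPHA = 0.1, 0.5        # exploration rate and learning rate

Q = defaultdict(float)           # Q[(pile, action)] -> estimated value for the mover

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile):
    if random.random() < EPSILON:                          # explore
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q[(pile, a)])    # exploit

def self_play_episode():
    pile, history = START_PILE, []           # (state, action) pairs, one per move
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    reward = 1.0                             # the player who moved last has won
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                     # alternate perspective each move back

for _ in range(20000):
    self_play_episode()

# The greedy policy usually recovers the known optimal strategy: leave the
# opponent a multiple of 4 stones whenever possible.
print({pile: max(legal(pile), key=lambda a: Q[(pile, a)]) for pile in range(1, START_PILE + 1)})

No human games are ever consulted: the table starts empty, and every improvement comes from the feedback of matches the learner plays against itself.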

Apple, Microsoft, and Alphabet are all forerunners in AI innovation. These technologies not only save time but also potentially save lives by minimizing human error and ensuring a safer working environment. In addition, automating repetitive tasks in design, planning, and management with AI frees up human workers to focus on more complex and creative aspects.

Artificial intelligence, technical automation, and bots are transforming workplace culture. Technology, however, has yet to master emotional intelligence, and this is where soft skills such as decision-making and empathy remain crucial in a deserving candidate.

AI bias can creep in when decisions made by AI reflect the conscious or unconscious values of the people who designed it, or of the data it is based on - for example, when finance teams make decisions on customers' credit or payment terms. Applying AI to F&A creates new demands for teams with both business and technical skills. People need industry and functional knowledge to provide essential context and review algorithms. Advanced teams are even hiring behavioral scientists and anthropologists. But they also need technical capabilities and roles, such as forecasting, data science and engineering, analytics, design thinking, and agile programming. Once you have the right people, they need the right infrastructure to work with: with easy access to intuitive technology at home, a workplace with outdated, clunky systems won't encourage them to stay.

It will kill some jobs, leave some untouched, and create new ones as well. Jobs that are likely to go away due to automation include call center employees, data entry operators, insurance underwriters, tax preparers, sales representatives, translators, and fast food employees.

Yet, no advancement can upstage psychiatrists, storytellers, world-class teachers, scientists, actors, and thought leaders because these roles need innovative and personal skills.

The World Bank estimates that up to 69% of today's job positions will become redundant. But there is no need to panic: for every job lost, new ones will come up. Look at history for proof. The 20th century had never heard of titles like Chief Technology Officer, Chief Delivery Officer, Chief Belief Officer, or Chief Gardener. It is not that job opportunities are not there; it is just that the skill set requirements have changed. So what matters most is to ensure that the workforce is smart and adaptive and can take up newer roles.

Like every other industry, the entertainment industry has been debating both the pros (such as the rise of new art forms) and cons (deepfakes that can replicate a performer's face and/or voice, with or without their permission) of the proliferation of AI.

While AI is a powerful resource that’s not going away, industries, governments and the public at large need to stay updated on its developments and think carefully about the ethical implications of its use.

The question persists: "If ethical principles deny us the right to do evil so that good may come, are we justified in doing good when the foreseeable consequence is evil?"

Some of the effects of AI that need to be kept in check are:

1. Phishing Messages And Malware

2. Identity theft: AI-generated deepfakes aren’t just targeting high-profile people. Fraudsters are leveraging them to steal individuals’ identities so they have access to bank accounts and confidential information. Luckily, verification platforms that have multiple identification factors can help deter fraud and the potential leakage of personal information and documents.

3. Increasingly Sophisticated Cyberattacks - Hackers are increasingly utilizing AI for sophisticated cyberattacks.

4. Disinformation Campaigns

AI-generated text can be used to create sophisticated disinformation campaigns. By emulating the writing style of influential figures, AI can generate fake news articles, social media posts or blog entries that appear authentic. This raises concerns about the spread of misinformation and the erosion of trust in online content.

5. Revelation Of Personal Data

AI models trained on large data sets can capture patterns and knowledge from text, potentially including sensitive or personal information. This raises concerns about the privacy and security of individuals’ data, as AI-generated text can inadvertently reveal private details or be exploited for malicious purposes, such as social engineering attacks or identity theft.

6. Reputational Damage

It’s unsettling that deepfake technology could enable highly damaging revenge scenarios. A vengeful person could easily make it appear as though someone has cheated by swapping faces in an intimate video; create a fake video of the victim saying offensive things, damaging their career (even if the video is proven to be fake); or blackmail someone with a deepfake video, threatening to release it publicly unless demands are met.

7. Impersonating Trusted Individuals

Deepfakes are on the rise and create security threats for both consumers and businesses. Bad actors can utilize AI to impersonate bank employees or even family members over the phone. These phishing attacks are very dangerous—their urgent and deceptive nature specifically targets human emotions with the ultimate goal of stealing personally identifiable information and/or money.

8. Manipulating Election Results

AI deepfakes can distort democratic discourse and manipulate elections. Deepfakes can be used to spread misinformation, propaganda and fake news about political candidates, parties or issues. Political leaders can be impersonated or discredited, as can political activists or journalists. This can influence voter behavior, undermine public trust and destabilize democracy. AI use needs to be controlled.

9. Autonomous Weapons Systems

I am sure there will be a time when AI-powered autonomous weapons systems will evolve. These systems could have the potential to make critical decisions about targeting and engagement without direct human control. This raises serious ethical concerns.

10. Image Manipulation

Most people do not realize that AI can be used to manipulate images. AI-powered image manipulation can take an existing image and change elements of it, such as the background, color and other features. This technology is used for everything from facial recognition to creating realistic deepfakes. It is a powerful tool that can be used both ethically and unethically, depending on the application.

11. Surveillance

One unsettling way AI can be leveraged is as a surveillance tool. Facial recognition technology is becoming more common, and there’s a concern among some that it may be used to keep an eye on people without their knowledge. I think we need to be cautious and hold companies that use this tech accountable so people’s rights are not violated.

12. Adversarial Attacks

AI adversarial attacks represent a surprising and concerning application of the technology. These attacks subtly manipulate AI inputs to induce erroneous outputs, misleading systems including those used in autonomous cars or for facial recognition. This unfamiliar threat can lead to significant security risks, making it vital to improve public awareness and system resilience.
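A toy sketch of the idea: the snippet below trains a simple logistic-regression classifier in plain NumPy and then applies a gradient-sign perturbation to one input. Real adversarial attacks target deep networks, but the principle is the same - each feature is nudged slightly in the direction that most increases the model's error, and the prediction usually flips even though the input barely changes:

import numpy as np

# Toy illustration of an adversarial (gradient-sign) attack. A simple
# logistic-regression classifier is trained, then one input is nudged in
# the direction that most increases its loss. Real attacks target deep
# networks, but the principle is identical.
rng = np.random.default_rng(0)

# Synthetic task: the label is 1 when the features sum to a positive number.
X = rng.normal(size=(200, 10))
y = (X.sum(axis=1) > 0).astype(float)

w, b = np.zeros(10), 0.0
for _ in range(500):                                  # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Pick a correctly classified example with a small margin and perturb it.
logits = X @ w + b
correct = (logits > 0) == (y == 1)
idx = np.argmin(np.abs(logits) + 1e9 * ~correct)
x, target = X[idx], y[idx]

p = 1 / (1 + np.exp(-(x @ w + b)))
grad_x = (p - target) * w                             # d(loss)/d(input)
x_adv = x + 0.3 * np.sign(grad_x)                     # small step that raises the loss

p_adv = 1 / (1 + np.exp(-(x_adv @ w + b)))
print("true label:", target)
print("prediction before attack:", round(float(p), 3))
print("prediction after attack: ", round(float(p_adv), 3))

Defences such as adversarial training and input sanitization exist, but the ease of crafting such perturbations is why resilience testing matters for any deployed model.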

13. More Pervasive And Invasive Advertising

AI can be used for more pervasive advertising. With AI, one can analyze the emotional state of a consumer and feed them highly personalized ads, exploiting their emotional vulnerabilities. AI algorithms can distinguish between happy and sad faces, understand text sentiments and tone of voice, and read other behavioral patterns to manipulate a user’s decision-making processes and nudge them into buying.

14. Creation Of Echo Chambers

The most unsettling development to me is the way AI serves up only what people want to see and know about. The more you click on sites and pages expressing a certain viewpoint, the more that viewpoint is shown to you. It is causing people to take sides and think those who don’t believe the same things they do are misinformed, unintelligent or misguided. In reality, every one of us is only being shown things that align with our existing viewpoints.

 

15. Realistic Digital Influencers

Companies are creating AI-generated social media influencers that are entirely computer-generated and designed to appear and act like real people. They can amass large numbers of followers, endorse products and even collaborate with other influencers—all without being human. These blurred lines between real and virtual individuals raise ethical concerns regarding transparency and authenticity in influencer marketing.

16. Creation Of Synthetic Data

One way AI is being leveraged that the general public may not know about is to create synthetic data, which imitates real data such as images, text, audio or video. Synthetic data serves several worthwhile purposes, including training machine learning models, testing software and enhancing privacy. However, there are also challenges regarding quality, validity, fairness and safeguarding the rights of the original data owners and users.

17. Medical Image Interpretation

AI’s ability to interpret medical images, such as X-rays or MRIs, is astonishing yet disconcerting. While it can aid in early disease detection, if the algorithms are flawed or biased, it may lead to misdiagnoses and inappropriate treatments. It’s essential that we approach AI in healthcare with a balanced understanding of both its vast potential and the need for rigorous validation.

That tiny object that looks like a mosquito is not a mosquito. It is an insect spy drone that can be remotely controlled and is equipped with a camera and a microphone. It can land on you, and it may have the potential to take a DNA sample or leave RFID tracking nanotechnology on your skin. It can fly through an open window, or it can attach to your clothing until you carry it into your home. One of the current areas of research reportedly being undertaken in the scientific and military field is the development of micro air vehicles (MAVs) - tiny flying objects intended to go places that cannot be safely reached by humans or other types of equipment. This is fifth-generation warfare.
