Artificial intelligence (AI) is technology’s latest streaking comet: a rapidly advancing, spectacular novelty promising more intelligence, automation and ease of life. Yet are we enabling a technology that will one day overpower humans and command life as we know it?
Specifically, the arrival of AI-based tools such as ChatGPT and DALL-E has dazzled us with their dynamic capabilities and unnerved us with their staggering potential. The current debate on AI hinges on broad philosophical questions and the public’s response. What do people make of all this?
But as these tools are increasingly embraced by corporations and organizations seeking to spark innovation, improve productivity and drive profits, a new question is raised: What is the industry’s responsibility to the public in introducing AI to our daily life?
ChatGPT: AI darling or danger?
In an astonishingly short period of time, AI has advanced from a crawl to an all-out sprint. No piece of tech embodies that acceleration more completely than ChatGPT. The average person probably doesn’t grasp the scale of change this tool has introduced. In a nutshell, ChatGPT draws on knowledge gathered from across the internet, aggregating that information to build answers to virtually any question or problem.
On the surface, this sounds less than revolutionary, but ChatGPT has the extraordinary ability to respond to user questions with informed, coherent, well-reasoned answers — almost instantly. It can also follow complex instructions in natural language and solve difficult problems with statistical accuracy. Some say this is an amazing “breakthrough” that represents thinking, rationalization and decision-making outside of the human mind.
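For readers who haven’t used it, the interaction model is simple: send a question, get an answer back. Below is a minimal sketch of that exchange, assuming the official OpenAI Python SDK and an API key in the environment; the model name and prompt are placeholders rather than recommendations.

```python
# Minimal sketch of querying a chat model, assuming the official OpenAI
# Python SDK (`pip install openai`) and an OPENAI_API_KEY environment
# variable. The model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize the main causes of urban traffic congestion."},
    ],
)
print(response.choices[0].message.content)
```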
Two considerations to raise before casting your vote: First, the internet isn’t always accurate. GIGO, a popular acronym in AI circles, means “garbage in, garbage out.” If the knowledge you collect and aggregate is garbage, your answer will be garbage. The ChatGPT model is designed to collect data from the internet and yield the answer that, based on statistical probabilities, is most likely. But if “most likely” isn’t actually accurate, then a user’s answer is still, essentially, garbage.
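To make the GIGO point concrete, here is a toy illustration with an invented mini-corpus: when frequency drives the statistics, the most probable answer and the correct answer can part ways.

```python
# Toy illustration of "garbage in, garbage out": if most of the source
# text repeats a false claim, the statistically "most likely" answer is
# still wrong. The corpus below is invented for illustration.
from collections import Counter

corpus = [
    "The Great Wall is visible from space.",   # popular myth, repeated often
    "The Great Wall is visible from space.",
    "The Great Wall is visible from space.",
    "The Great Wall is not visible to the naked eye from orbit.",  # accurate
]

most_common_claim, count = Counter(corpus).most_common(1)[0]
print(f"'Most probable' answer ({count}/{len(corpus)} sources): {most_common_claim}")
# The aggregate confidently returns the myth, because frequency, not truth,
# drives the statistics.
```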
Will ChatGPT replace search?
So what’s the harm? Let’s play out the scenario of shifting from the widespread use of search engines to aggregating tools such as ChatGPT, a shift that has already begun in earnest. If AI tools aggregate information, users never see the original content page, which means they also never see any ads. If they don’t see ads, publishers lose the ad revenue that funds original content, and eventually real content will become scarce. The world of SEO will disappear, as the only information available for aggregation will be that generated by interested parties. Funded organizations will create content based on their own goals, and the open internet as we know it today will no longer exist. There will simply be no incentive to write objective content.
The internet is the essence of freedom of speech. With easy access and no controls over who can post or what they can post, search engines surface a sea of opinions and bias. In most cases, information found on the web has not been evaluated for authority, reliability, objectivity, accuracy or currency. Since early 2020, particularly with the onset of the pandemic, the internet has been a vehicle for misinformation or “fake news,” some of it intentional and some simply due to a lack of credible sources.
Nonetheless, ChatGPT, along with an array of other AI data aggregation tools, marks the latest leap in the ongoing evolution of AI. Without more controls to ensure that content posted to the internet is objective, validated, accurate and current, the value proposition of these tools is questionable.
Credible AI advances deserve support
The story of AI isn’t strictly a cautionary tale.
The technology is already being used by companies in areas ranging from enhanced safety and security to improved shopping experiences. And its long-term potential has far more significant implications.
AI’s ability to consume volumes of information far beyond what any human can grasp is wildly promising. Take healthcare, for instance. A few years ago, at one of my startups, we found that doctors were expected to follow nearly 50 diagnostic protocols per case — a huge drain on time and resources, because humans can’t process that amount of information (which is, by the way, constantly growing).
Consequently, we created a decision support system that does the processing for doctors. If you can harness AI tools to summarize, make recommendations and suggest proper procedures in those situations, they could dramatically improve the healthcare industry’s reach and effectiveness.
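To give a flavor of what such a system does, here is a hedged sketch of rule-based protocol screening. Every protocol name, field and threshold below is hypothetical, and a real clinical system would be vastly more rigorous.

```python
# Hedged sketch of rule-based clinical decision support: screen a patient
# record against many diagnostic protocols and surface only the ones whose
# criteria match. All protocol names, fields and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Protocol:
    name: str
    applies: Callable[[dict], bool]  # predicate over a patient record

protocols = [
    Protocol("Sepsis screen", lambda p: p["temp_c"] >= 38.3 and p["heart_rate"] > 90),
    Protocol("Hypertension follow-up", lambda p: p["systolic_bp"] >= 140),
    Protocol("Diabetes panel", lambda p: p["fasting_glucose"] >= 126),
    # ...in practice, dozens more protocols per case
]

patient = {"temp_c": 38.9, "heart_rate": 104, "systolic_bp": 128, "fasting_glucose": 101}

matched = [proto.name for proto in protocols if proto.applies(patient)]
print("Protocols to review:", matched)  # -> ['Sepsis screen']
```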
Some types of aggregated data can easily be verified through human sources and other platforms. ChatGPT, for instance, has many practical uses and reduces, if not eliminates, several types of research work that are resource- and time-intensive. Furthermore, competencies will surely develop to address this uncharted area of the AI realm, where the ability to ask the right questions of aggregating software will become a profession of its own.
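As a small illustration of that emerging skill, the sketch below contrasts a bare question with the same request framed with a role, constraints and an output format. The template is illustrative, not an established standard.

```python
# Sketch of "asking the right questions": the same request, framed with an
# explicit role, constraints and output format, tends to draw a more usable
# answer than a bare question. The template is illustrative only.
def build_prompt(question: str, audience: str, output_format: str) -> str:
    return (
        f"You are an analyst writing for {audience}.\n"
        f"Question: {question}\n"
        f"Answer in {output_format}, and state any assumptions you make."
    )

# Bare question: "Is our checkout page slow?"
# Structured version of the same request:
print(build_prompt(
    question="Which factors most likely slow down our checkout page?",
    audience="a non-technical product team",
    output_format="three short bullet points",
))
```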
What is the industry’s responsibility for AI innovation?
While there are many companies with altruistic intentions, the reality is that most organizations are beholden to stakeholders whose chief interests are profit and growth. If AI tools help achieve those objectives, some companies will undoubtedly be indifferent to their downstream consequences, negative or otherwise.
Therefore, addressing corporate accountability around AI will likely start outside the industry, in the form of regulation. Currently, corporate regulation is fairly straightforward. Discrimination, for instance, is unlawful and definable. We can make clean judgments about matters of discrimination because categories such as sex, origin and disability are well understood. But AI presents a new wrinkle: How do you define these things in a world of virtual knowledge? How can you control it?
Additionally, a serious evaluation of what a company is deploying is necessary. What kind of technology is being used? Is it critical to the public? How might it affect others? Consider airport security. As a member of the public, what level of privacy are you willing to sacrifice to achieve a higher degree of safety and security?
That’s a question between vendors, users and lawmakers. In some locales, for instance, facial recognition technology isn’t allowed. But what if airports were provided a very specific list of known terrorists that this technology could instantly identify? At the end of the day, responsibility and deployment decisions are shared between vendors and end users.
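To make that trade-off concrete, here is a hedged sketch of how such watchlist matching is often described: compare a captured face embedding against known embeddings and flag close matches. The names, embeddings and threshold are invented placeholders; a real system would rely on a trained face-recognition model and far more careful calibration.

```python
# Hedged sketch of watchlist matching: compare a face embedding against a
# small list of known embeddings using cosine similarity. The identities,
# embeddings and threshold are made-up placeholders for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

watchlist = {  # hypothetical identities -> hypothetical embeddings
    "subject_A": np.array([0.9, 0.1, 0.3]),
    "subject_B": np.array([0.2, 0.8, 0.5]),
}

camera_embedding = np.array([0.88, 0.12, 0.28])  # placeholder capture

THRESHOLD = 0.98  # placeholder; tuning it trades false alarms against misses

for name, emb in watchlist.items():
    score = cosine_similarity(camera_embedding, emb)
    if score >= THRESHOLD:
        print(f"Possible match: {name} (similarity {score:.3f})")
```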
An idealistic approach to AI
The whole world of AI is just now awakening. We see it every day in all that we do, from the facial recognition security on our iPhones to the viewing recommendations we receive from Netflix. But much of what drives progress in AI is the financial incentive.
I’m passionate about AI and my company’s contributions because I believe our work translates into a positive impact. But I’m not naïve: Not everyone working with these technologies feels the same way. And that’s all the more reason that we should tirelessly strive to put AI regulation and deployment in the hands of responsible, trustworthy people.
AI holds the power to dramatically change the world — for better or worse. Wielding that power isn’t the responsibility of the industry alone. In the same way that companies have begun to define and publicize their organizational values regarding climate change, human rights and similar issues, they will need to evaluate (and constantly re-evaluate) their stance on the purpose and usage of AI. If the industry fails to hold itself to account in that regard, the public — as it has shown time and again — most certainly will.