

My Thoughts on The Recent Artificial Intelligence Development

  • May 10, 2023
  • by Nur Imroatun Sholihat

“AI is going to be one of the defining technologies of our time, and it's really important that we get it right.” (Altman, 2023)

Although artificial intelligence (AI) has been around for more than half a century, I noticed that only recently, triggered by ChatGPT's massive success, has the general public started to pay close attention to it. AI had permeated our daily lives long before that (consider the recommender systems in online marketplaces or social media platforms), but only in recent years have public conversations stopped shying away from it. For my part, I had been very cautious not to give opinions on AI until I had enough knowledge. Now I think the conversation around this topic is necessary and timely, and I am relatively ready. Therefore, I mustered up my courage to finally write about it.

(Disclaimer: While I have educated myself on the topic, I recognize the possibility that I am unconsciously biased or have learned less than necessary. After all, I am just an ordinary tech enthusiast with very limited knowledge, so take this post with a grain of salt. I am ready to admit my mistakes if these thoughts are proven wrong in the future. I used ChatGPT as an example here not to undervalue thousands of awesome AI products, but as a representative of them.)

These days, Lex Fridman's interview with Sam Altman (OpenAI CEO) occupies a large chunk of my brain. It is living in my mind rent-free and I have no problem with that. Unless Lex decides to ask me about the rent price, I would not dare to charge him. (LoL, my unfunny joke is back!) On a serious note, the interview left me torn between wanting AI development to get full support (which sometimes means allowing the development to be highly experimental) and wanting it to be strictly regulated. I hope AI researchers/practitioners and regulators out there can find the perfect balance: leveraging the maximum possible benefits humanity can generate from AI while upholding the highest ethical principles and the responsibility of creating a better world without disadvantages that outweigh the advantages--including from the most vulnerable people's perspectives. In a utopian society, I would end my post here, because I would have done my part by providing the recommendation above. But we all know that this complex world of ours does not work that simply. So let me continue.

In recent years, the world has been changing fast right in front of our eyes, and AI is one advancement the masses cannot take lightly. AI systems have become much more powerful and relatively more reliable, which Sam narrated as "we don't get mocked that much anymore". AI used to be severely underestimated, but now, with recent advancements including GPT, many people even find it potentially disturbing to mankind. Yes, while AI has created optimism and enthusiasm for many people, the other side of the population is scared and pessimistic about it. Sam himself is both excited and frightened--something I appreciate, because acknowledging both extremes of benefits and risks is essential, especially when the stakes are this huge. He furthermore acknowledged that there will never be a completely unbiased version of GPT. What they can do is aim to make it as neutral as possible through RLHF (reinforcement learning from human feedback) and give more control to the people. For that reason, ChatGPT was deployed early to generate human feedback and give the public more control, so that it can be iteratively fine-tuned based on collective inputs. Another thing I noticed is that the GPT-4 Technical Report (OpenAI, 2023) lists possible risks, such as generating harmful advice or inaccurate information, and what the organization has done to mitigate them.

While the current AI has definitely not yet "upheld the highest ethical principles and aligned with the best interests of humanity", we should be a bit relieved that at this critical turning point of human history, people who (seemingly) strive to be balanced and cautious, like Sam, are in the driver's seat. At least, the OpenAI people are trying their best to get it right. I want to believe them.

Explainable AI (XAI)

To realize the full potential of AI, it is important to prioritize alignment with ethical considerations and human values. The concept that I believe is useful to start this journey is explainability or interpretability: the idea that a machine learning model and its output can be explained in a way that "makes sense" to a human being at an acceptable level (C3.ai, n.d.). Salierno (2023) described it as the ability to see inside what's described as the "black box" of algorithmic decision-making, while Miller (2019) defined it as how easily a human being can interpret and understand how the model arrived at a decision or prediction.

In simple words, XAI allows us to see the workings between input and output. This is crucial because only through this transparency can we examine and evaluate AI models to ascertain what is influencing their decisions and identify any potential biases or ethics violations.
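To make the idea concrete, here is a minimal sketch of what "seeing the workings between input and output" can look like in the simplest possible case: a linear model, where each feature's contribution to a prediction is just its weight times its value. The model, the feature names, and the weights below are all invented for illustration; real XAI tools apply far more sophisticated techniques to genuinely opaque models.

```python
# A minimal, hypothetical sketch of explainability: for a linear model,
# each feature contributes weight * value to the prediction, so the
# "black box" can be opened by listing those contributions.

def explain_prediction(weights, bias, features):
    """Return the prediction and the per-feature contributions behind it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Invented credit-scoring model: weights and applicant data are made up.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
bias = 1.0
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, contribs = explain_prediction(weights, bias, applicant)
print(f"score = {score:.1f}")
# List contributions from most to least influential (by absolute size),
# which reveals, e.g., whether "debt" is dragging the score down.
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {c:+.1f}")
```

An auditor examining such an explanation could immediately ask the right questions: why does this feature dominate, and should it be allowed to? That is exactly the kind of scrutiny that opaque models resist.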

What is Next

After that, we shall evaluate AI processes and models, including their privacy and security aspects. Data collection, and indeed everything in the data life cycle, should be transparently communicated to the data owner. The data behind the models should be governed properly, mainly to ensure the quality of the decisions made and the security of the data. We should also consider how intellectual property could be negatively impacted by AI. Additionally, potential misuse of AI technologies, such as deepfakes, should be mitigated by, for example, providing a mechanism to confirm the originality of an audio or video file.

Other concerns that have arisen involve accountability and regulation. Someone should be accountable for a decision even when it is generated by AI; therefore, who is accountable should be defined. In addition, regulatory development will probably always be outpaced by AI development. As a consequence, it is important to determine how we can ensure that AI is aligned with regulatory principles. We shall continuously monitor and evaluate AI processes and models, prioritizing the ones that have huge impacts on mankind.

Closing

While I definitely cheer for AI development, I also want it to be heavily regulated. At the minimum, I hope that AI models will uphold ethical principles and consider the best interests of humans and the broader systems. I know this balance is difficult to achieve, and I recognize all the hard work the AI people have done toward it. I am looking forward to more robust, ethical, and reliable AI. I am optimistic about it.

--------

Image by rawpixel.com on Freepik

--------

References:

Altman, S. 2023. "Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI" in Lex Fridman Podcast #367, <https://www.youtube.com/watch?v=L_Guz73e6fw&t=6324s>

C3.ai. n.d. Glossary. accessed 10 May 2023, <https://c3.ai/glossary/machine-learning/explainability/>

Miller, T. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, Vol. 267, pp. 1-38, https://doi.org/10.1016/j.artint.2018.07.007.

OpenAI. 2023. GPT-4 Technical Report. accessed 10 May 2023, <https://cdn.openai.com/papers/gpt-4.pdf>

Salierno, D. 2023. Explainable AI pulls back the curtain on machine-made decisions. Internal Auditor Magazine February 2023, a publication of The Institute of Internal Auditors

2 Comments:

  1. Nice writing. In the end, as far as I'm concerned, AI takes its place as a tool to help users make decisions, not to define what users must do. It lies in our own hands to make it either useful or dangerous.

    1. Thank you. I concur with your perspective. AI is indeed a powerful tool that may help people make decisions by offering insightful information and analysis. It should never, however, take precedence over human autonomy. In the end, it is our responsibility to decide how we use AI and to make sure it stays helpful rather than potentially harmful. My point, as mentioned in the post, is that accountability should be established: even though the machine is used to help, we should ensure through regulation that humans cannot avoid responsibility. Once again, thank you for taking the time to read and comment :)

