Cakes Can’t Be Unbaked: Why You Should Think Twice About AI

By Karen Geappen | Published on: 22 June 2023 | Categories: Security, Tech & Data

There has been a flurry of recent announcements from governments and leaders in the AI field on the need to manage and regulate AI. Find out why you should not wait for regulation, and how we can protect our privacy and assets at the individual, organisational and community level.


Discussions of AI regulation are progressing in the EU, China, Japan, India, Canada, the US and now Australia. Despite the establishment of AI risk management frameworks and standards by the likes of NIST and ISO, over 1,100 signatories, including prominent technology and AI leaders and personalities, have called for a ‘pause’ to AI development to enable a “level of planning and management”. At the macro level there is certainly abundant warning and publicity about the potential impact of AI on humanity if it is not appropriately and globally managed. But what does this mean at the individual, organisational and community levels? We are all intrinsically linked to this problem, and understanding how AI impacts us will enable us to make informed decisions and choices and to protect ourselves from things outside our control.

In this context, when talking about AI and our assets, we’re referring to information: information about you, information that defines you, and information that frankly may be no one else’s business but your own. This applies equally to the organisation you’re in. While the protection of information and care in releasing it are nothing new, what is different with AI is how ‘baked in’ information can become, meaning that awareness of what information goes where, and to whom, becomes much more critical.

In a traditional sense, when this information is provided to a technology system for the purposes of providing a function (say, providing your name to log onto a library system to borrow a book), there is an exchange and input of your information. This information is then ‘filed’ away in a ‘cabinet’. It is rarely altered, and if it is, it is by distinct algorithms and can be undone. If in some instance you want to ‘disappear’, it is possible for your information to be removed from the cabinet, including any subsequent information that was ‘calculated’ from your original data.

The difference with AI (using ‘AI’ here in the very general sense it carries in common language, covering many different technologies) is that we are no longer putting the information into a ‘filing cabinet’. AI that is generative, learning, or similar processes incoming knowledge to alter itself – to learn. If the information provided is imagined as ingredients in a giant cake, and the AI as the process of mixing and baking the cake, then once your information is added there is no way of completely extracting it. As much as you think you can pick the frozen berries out of the mix, there will be residue. Worse still if the AI ‘blends’ your information berries into the cake batter, or if the cake is already baked.

The cake can’t be unbaked. That is why regardless of the existence (or not) of regulation and global management, it is imperative that we integrate AI awareness into our ‘cyber-safe’ messages at the individual, and corporate/organisational levels, just like we understand the importance of looking both ways before crossing the road.

Individual

At an individual level, the information you directly provide to AI is incorporated and blended into its algorithms and, essentially, its ‘being’. Unless you have a deep understanding of the AI, it is difficult to know exactly how your information is being dissected and used. Is it being used to help profile a particular group that is already marginalised? Is it inadvertently adding to, or countering, inherent bias? Is your information being aggregated with other people’s information to allow analysis or to reveal information previously kept secret (sometimes for good reason, such as hiding the protected identities of victims of crime)? As discussed above, once your information is in the AI system it can’t be extracted: it is no longer in your control and you can’t ask for it to be removed.

Here it is important to note that you may not be the one directly putting your information into an AI system. Anyone who has knowledge of you and chooses to enter it into an AI may be inadvertently releasing your information into the unknown, from companies using AI to drive advertising efficiencies, to those using online AI (such as ChatGPT) to help generate text for letters or notes. Even if the information is obfuscated or ‘masked’, given the processing and sheer volume of data going into some of these AI systems (especially those available online), it is not impossible for AI to reconstruct the original data.

Essentially, at an individual level, a greater awareness of the implications of AI for privacy is critical: from AI’s ability to reverse engineer or deduce the information it has gathered, to the inability to ask for your information to be removed. That awareness is needed not just when we disclose information ourselves, but also when we provide it to companies, organisations and people using AI systems, who should obtain proper consent before using our information.

Organisational

From an organisational perspective, awareness of AI is also critical to the protection of company secrets and sensitive information. Take the case study of Samsung from earlier this year, with the accidental leakage of sensitive information: staff had been using ChatGPT to assist in their work and had uploaded sensitive source code. Once the information is thrown into the cake batter, as Samsung has learnt, the cake can’t be unbaked. This can compound if the AI is one that learns from input information, especially competitively sensitive information, and incorporates it into its algorithms when generating responses to subsequent queries, potentially exposing it to competitors.

Organisations need to be aware of how to balance harnessing the power of AI with the risks to their organisation’s assets, including information and reputation. For example, an in-house AI system with no external data flows might protect sensitive information, but it reduces the AI’s ability to incorporate knowledge from outside sources, increasing the risk of bias and reducing how effectively the AI can learn. Conversely, relying entirely on an open AI can lead to the leakage of sensitive information. Even when information is ‘thought’ to be sanitised, AI systems may still be able to reconstruct sensitive corporate information.
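To make that idea a little more concrete, below is a minimal sketch in Python of what a pre-submission screening step might look like: a check that scans a prompt for obviously sensitive content before it is allowed to leave the organisation. The patterns, function names and the commented-out send_to_ai_service call are all hypothetical and purely illustrative; a real policy would reflect the organisation’s own data classifications, and pattern matching alone is not sufficient protection, only a trigger for the human decision that should follow.

```python
import re

# Illustrative patterns only; a real screening policy would be far broader
# and tailored to the organisation's own data classifications.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|password|token)\b\s*[:=]\s*\S+"),
    "source code": re.compile(r"(?m)^\s*(def |class |#include |import )"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_external_ai(prompt: str) -> None:
    """Pass the prompt onwards only if screening finds nothing suspicious."""
    findings = screen_prompt(prompt)
    if findings:
        # Block the submission: once the information is in the model,
        # it cannot be pulled back out.
        raise ValueError(f"Prompt blocked, possible sensitive content: {findings}")
    # send_to_ai_service(prompt)  # hypothetical call to the external AI service
    print("Prompt cleared for submission.")
```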

Organisations must also be aware that by incorporating AI into their processes and systems, they may inadvertently be ‘baking in’ a dependency on the AI itself. If the AI ceases to function unexpectedly, it may be so embedded in the organisation’s processes that its failure becomes catastrophic. This can be managed through regular check-ins across the AI lifecycle and by delving deeper into the risks of AI end-of-life, both planned and unplanned (stay tuned for an upcoming blog on this).

So what now?

This does not mean that AI shouldn’t be used or that information should never be disclosed. What it does mean is that when we input our information ‘ingredients’, we do so with knowledge of the AI system and its capabilities.

For example, using AI to analyse, process and learn from medical information for the purposes of discovering new diagnostic methods and patterns is valuable. However, the selection of information used for this purpose, and the sharing of that information and the outcomes, must be appropriately managed. That means each piece of information that goes in, and each piece of information that is shared, must be the result of an active, conscious decision by a human with the appropriate authority over that information and the AI system’s outcome. This goes a long way towards protecting information and providing oversight of any bias the AI may produce.
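As a rough illustration of that principle, the sketch below (in Python, using entirely hypothetical names such as DataItem, contribute_to_ai and the commented-out train_or_query_model call) shows one way a simple human sign-off gate might be expressed: no data item is contributed to an AI pipeline unless a named person with the appropriate authority has explicitly approved it. It is a sketch of the idea, not a prescription for any particular system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataItem:
    description: str                   # e.g. "de-identified pathology results, 2019-2022"
    approved_by: Optional[str] = None  # the human who made the conscious decision to release it

def contribute_to_ai(item: DataItem, authorised_approvers: set[str]) -> None:
    """Contribute data only when a human with the right authority has signed off."""
    if item.approved_by is None:
        raise PermissionError(f"No human approval recorded for: {item.description}")
    if item.approved_by not in authorised_approvers:
        raise PermissionError(
            f"{item.approved_by} lacks authority to release: {item.description}")
    # train_or_query_model(item)  # hypothetical call into the AI pipeline
    print(f"Contributed with approval from {item.approved_by}: {item.description}")

# Example: only named custodians can approve release of the dataset.
record = DataItem("de-identified pathology results, 2019-2022", approved_by="data custodian")
contribute_to_ai(record, authorised_approvers={"data custodian", "chief medical officer"})
```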

The overarching message is one of awareness. AI systems are not naturally bound by ethics and cultural norms. The calls from national regulators and leaders in the field recognise this deficiency in AI: it does not inherently know ‘good’ and ‘bad’ the way humans do. Unlike traditional technologies, where this can be deduced during design or retrospectively on review, AI can be too complex for this to occur. On the other hand, AI technology can empower and enhance what organisations are able to do. The key to doing this safely and appropriately is awareness and human involvement throughout the process.

At Anchoram we are well placed to assist and advise on the capabilities and appropriate management of AI, AI-like and automation technologies. Our teams comprise talented professionals with experience spanning Government, Critical Infrastructure, Education, Health, Resources and Utilities. With our collective skills and expertise, we bring decades of diverse experience to contextualise AI opportunities and risks and to advise on how they apply to your specific organisation.
