
The need for a Testing Code of Ethics.

Author

Bart Vanherck

Date

01/06/2021


Artificial Intelligence appears to be the solution to many problems. Chatbots, for example, act like real people with the help of Artificial Intelligence, and they are already so advanced that it is hard to tell whether you are chatting with an AI or a real person. And while AI certainly has a positive effect on the world, it also brings dangers that can’t be ignored. To stay with the chatbot example: some dating sites deploy them, and the bots try to convince customers, mostly male customers, to take out a paid subscription on the site. 

Many people believe they are talking to a real person, while in reality it is nothing but software. Using AI in this fashion is unethical, and rather scary. 

Facebook has, in the past, used its users as subjects in experiments without their knowledge. The researchers wanted to see whether seeing more positive or more negative news in their feed had an effect on their happiness. Would a person appear happier after seeing a lot of good news, and unhappier after seeing a lot of bad news? 

To test this, Facebook manipulated the news feed by filtering out good or bad news. The result was as expected: people posted more negative messages when they saw more negative news, and vice versa. 

Yet these users were not informed that they were part of an experiment. Is that ethical? 

Another example would be the following experiment. 

Researchers wanted to know whether they could get more people to vote in the next election. They created an ad that simply told people to go and vote. That seems like a noble thing to do, because higher turnout is good for democracy. But this too can be seen as manipulation: having more people vote might, of course, change the result of the election. A candidate might now win where he would otherwise have lost. 

Is this experiment ethical?   

Banks and insurance companies use credit ratings to determine whether a customer is able to pay back a loan. You are unlikely to get a loan if your rating is low, and even if you do, you will probably pay more interest. On a scale like that, you obviously want a higher rating. 

Facebook has also developed such a rating. On its scale, it is your social network that determines the rating. That sounds interesting, but is it? 

Suppose you moved to Africa a few years ago to help the local people there, but now you wish to return to your hometown. You decide to buy a new house, and so you go to the local bank. If the bank were using a traditional credit rating, you would score low; after all, you do not have any income yet. With Facebook’s rating, the principle is different: the bank can now also see whether your friends have good jobs, which could give you a higher rating. Is that good news or not? 

Now suppose you are a hard-working person who does not earn that much money, and you have many unemployed friends on social media. The same algorithm that helped you get a higher rating before now concludes that you are a bad risk and gives you a low rating. In this scenario, you are suddenly worse off through no fault of your own. 
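To make that danger concrete, here is a minimal Python sketch of such a friendship-based score. This is not Facebook’s actual algorithm; the function and its inputs are purely illustrative, assuming the rating is driven only by how many of an applicant’s friends are employed.

```python
# Purely illustrative sketch of a friendship-based credit score; not any
# real company's algorithm. The rating depends only on the applicant's
# network, not on the applicant's own income or payment history.

def naive_social_credit_score(friends_employed: int, friends_total: int) -> float:
    """Return a score between 0 and 1 based solely on the applicant's friends."""
    if friends_total == 0:
        return 0.5  # no network information: fall back to a neutral score
    return friends_employed / friends_total

# The returning aid worker: no income yet, but well-employed friends.
print(naive_social_credit_score(friends_employed=18, friends_total=20))  # 0.9

# The hard-working applicant with many unemployed friends: a low score,
# through no fault of their own.
print(naive_social_credit_score(friends_employed=3, friends_total=20))   # 0.15
```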

Looking at these examples, it is easy to decide for yourself whether they are ethical or not. But what would you do if you actually had to test such a system? Do you report a bug when you find a problem that is not a functional problem but an ethical one? Should ethical testing be part of the non-functional requirements? Those requirements need to be tested too. 
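As a sketch of what such a non-functional test could look like, here is a hypothetical pytest-style check against the naive score from the sketch above. The 0.1 tolerance is an arbitrary assumption; the point is only that an ethical requirement can be written down and asserted like any other requirement.

```python
# Hypothetical pytest-style test of an ethical, non-functional requirement:
# two applicants with identical personal merits must not receive materially
# different ratings just because their friends' employment differs.
# Reuses the naive_social_credit_score sketch above; the 0.1 tolerance is arbitrary.

def test_rating_does_not_depend_on_friends_employment():
    hard_worker = naive_social_credit_score(friends_employed=3, friends_total=20)
    aid_worker = naive_social_credit_score(friends_employed=18, friends_total=20)

    # With the naive score this assertion fails, and that failure is exactly
    # the kind of "ethical bug" a tester would have to decide how to report.
    assert abs(hard_worker - aid_worker) < 0.1
```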

Systems that use artificial intelligence and big data are still very young, and yet there are already many examples of unethical use. 

I believe that we should create a Testing Code of Ethics. The following rules are already in my Code of Ethics: 

  • The system should not harm its users or society; neither physical nor mental harm is acceptable. 

  • The public good is a central concern. 

  • The system is not allowed to discriminate in any way. 

Are there other ethical rules that you want to test for? I’d love to hear from you.