
Digital Dangers, and what they mean for Myanmar

Digital Dangers are the warning signs that companies and governments must heed, in order to avoid human rights abuses, when governments interact with the ICT sector or use ICT products and services furnished by companies.

Salil Tripathi, Senior Advisor, Global Issues at the Institute for Human Rights and Business, one of MCRB’s two founder organisations, visited Myanmar in November 2014. He spoke at Pansoedan Space in Yangon at an event co-organised with PEN Myanmar on ‘Digital Dangers’: the risks ICT companies face when, intentionally or not, they act in ways that lead to human rights abuses, sometimes through their own actions and sometimes by complying with government requests or orders.

Salil talked about how digital dangers relate to the right to freedom of expression, and the implications for Myanmar. MCRB is conducting a sector-wide impact assessment of the information and communications technology (ICT) sector, which will research the positive and negative human rights impacts of the sector in Myanmar.

This is a summary of Salil’s remarks.

“Freedom of expression is an enabling right. When the right is exercised, it enables the realization of many other rights. It is a right in itself. But the right also makes it possible for us to exercise and enjoy other rights.

Freedom of expression makes it possible for people to get their views heard, to seek the information they need, to offer alternative points of view, and to communicate freely with others. When other rights are violated or denied, such as political participation or access to health, food, housing, or water, it is through freedom of expression that the aggrieved parties are able to express their views, criticisms, and demands, and to assert their rights. Some legal scholars therefore consider it a right as important as the right to life.

Freedom of expression is not a modern innovation; it is an old right, dating back to the time when the first canvas was painted, the first poem was written, or the first charter of demands was drafted. Technology is the means through which the right is realised, even amplified. But the right precedes the advent of modern technology.

Freedom of expression includes the right to seek information. It enables people to ask questions, to counter arguments, to challenge views. It means being able to inquire freely: by going to a library, reading a book, seeing a film, watching a play, or seeing art on walls.

It also includes the right to receive information – through the newspapers, books, theatre, cinema, radio, television, and the Internet. And it is the right to impart information – by saying what you think, by expressing what you believe – through any means available, including, now, the Internet.

And as the term suggests, freedom of expression is about the right to express opinions. That means any opinion: even if it is wrong, even if it is offensive, even if it is contrary to what the government says or what a religious group insists.

The right is precious. Over centuries, people have gone to jail defending it; people have faced threats and died giving meaning to this right. People have published samizdat (or underground) literature. People have found unusual ways to obtain banned books, to see banned films, and so on. When the Indonesian novelist Pramoedya Ananta Toer was imprisoned on Buru Island for many years and denied any means with which to write, he memorised the stories forming in his mind and later managed to write an astonishing set of novels, the Buru Quartet.

People have died trying to impart information. William Nygaard, the publisher of the Norwegian edition of Salman Rushdie’s novel The Satanic Verses, was shot; Igarashi Hitoshi, the novel’s Japanese translator, was stabbed to death.

Companies in the technology sector see themselves as great enablers of this right. They provide the means for us to communicate quickly, and with many people. They provide services – microblogging sites like Twitter, email services like Gmail, social networking sites like Facebook – for free, but creating, maintaining, running and innovating those services costs money, and these companies depend on advertising. Advertisers advertise on these services because they want access to data – about the users, their preferences, and their profiles. Just as the owner of the corner grocery store knows what you like and offers you those products, or the keeper of the corner bookshop knows what you like to read and points out the new novel by a writer you enjoy, web-based supermarkets and retail companies like Amazon are able to guess what you might like because they study your past behaviour. They have the data; they analyse it and market it, in return for making your shopping experience simpler.

Services like Facebook and LinkedIn try to connect you to things, groups, ideas, and people you know or are likely to be interested in. They can do this because they have access to data and are able to build a fairly comprehensive picture of who you are. The more you use their sites – by liking posts and products – the more they know.

Some users like that; many don’t. Many like it when the bookseller knows exactly what they might be looking for, but they may not like it very much if other sellers start offering products and flooding their mailboxes. Then mail becomes junk mail.

This is why privacy is important on the Internet. The right to express oneself is important; the right to be left alone is also important. It is a delicate balancing act – governments find it hard enough to protect privacy, and many companies are incapable of placing the interests of the consumer at the heart of this debate.

At the Institute for Human Rights and Business we believe the Internet and technology are great tools with which human rights can be enjoyed, protected, and advanced. The Internet enables the realization of many rights. But the more we interact on the Internet, the larger the footprint we leave behind, and the easier we make it for marketing companies, and indeed governments, to figure out who we are and what we like or dislike. When we are online, we are always being monitored by someone – a market researcher, our employers, other companies, governments (our own as well as others’), and sometimes criminal groups. If you value privacy, you have to sacrifice some convenience; if convenience is important to you, you may have to let go of some of your privacy.

Who owns the data? Who decides to commercialize it? Who controls the data?

These are profound questions, and the answers are still being negotiated. It is clear that we should own our own data, and that we should have a say in whether it is commercialized. We should also know who is tracking or monitoring us and why, and such tracking should, ideally, happen only with our consent.

But many of these rights are violated – sometimes deliberately, sometimes inadvertently, and sometimes because technology has dual or multiple uses. Technology offers dynamism, but it can also be dangerous. It is with that in mind that we embarked on a project called Digital Dangers. Let me now turn to that.

There are six critical Digital Dangers, and these are:

  • Disconnecting or Disrupting Network Access:

    In times of civil strife, crisis, or emergency, governments have sometimes required the temporary suspension of, or disruption of access to, the Internet or mobile phone services, citing harms that might follow if a bomb is detonated by a mobile phone, if there is civil unrest, or if unsubstantiated rumours that could incite violence are allowed to spread. As more and more people become connected, network disruption or disconnection will have a greater and increasingly dangerous impact on human rights. It is therefore critical that companies are prepared to respond to such orders in a way that does not leave them complicit in human rights violations.

  • Monitoring, Evaluating and Blocking User Content at the request of third parties (state or non-state actors):

    Governments often ask companies to monitor and evaluate user content to identify hate speech, the exchange of child pornography, or other activities that might incite crime, including violence. Sometimes governments may ask for direct access to such data. In some countries, non-state actors such as religious groups, political groups, or other dominant groups demand that specific content be blocked or barred. Companies need clear frameworks, in line with international human rights standards, to determine the extent to which they can cooperate with such requests.

  • Selling dual-use technology when there is a high probability of its misuse:

    Technology often develops faster than regulators can react to its potential for negative impacts, and as a result export controls may lag behind technological developments. Companies building and operating networks must abide by legal requirements to provide systems allowing “lawful interception”. If technology developed and intended to improve connectivity and security is misused, it can pose a risk to human rights, for example through censorship and privacy infringements. Companies must conduct due diligence on sales to governments and third parties to minimise the risk of misuse, which can have adverse consequences for human rights, so that the technology is used for its intended purpose: providing communications that help to realise many human rights.

    Companies can also use software to modify the features and functionality of hardware products, such as network infrastructure, so that they act in ways not originally intended, making it possible for the buyer to use the technology to harm human rights. Governments can do this by asking the company to modify the product, or by getting other companies to build additional features into it, so that they can undertake activities that are illegal and/or cause harm, such as arbitrary surveillance.

  • Complying with government orders to impose surveillance:

    Governments sometimes ask companies to enable real-time surveillance of individuals or groups, including by providing technology to intercept communications and/or record them outside the systems provided for lawful interception. Though often justified as necessary to prevent crime, in some countries governments use the technology to spy on people whose activities they oppose but which are otherwise legitimate and legal.

  • Monitoring user content under the company’s own policies:

    Most companies have terms of agreement with their users, and those terms often contain phrases that restrict the enjoyment of human rights. Companies’ “community standards” are often poorly defined and left to the interpretation of company officials, who are not necessarily experts on human rights. This can lead to companies acceding to demands from non-state actors: in some countries, non-state actors such as armed groups, militant organisations, religious groups, and others demand that content that offends them be taken down. Doing so without a court order, or without a requirement from a legitimate authority, undermines the human rights of those who created the content.

  • Handing over stored user content and data to the state:

    Governments have often made direct requests or passed orders, sometimes backed by courts, asking companies to hand over the user data (or metadata) of individuals or groups. There are allegations that governments have also sought access to such data directly or through specific tools. It is crucial that proper checks and balances are in place so that companies avoid being complicit in violations of privacy. Even if the government acts in good faith, companies risk losing the trust and the business of their customers if they are perceived to have acted beyond what the law requires.

The primary responsibility to protect human rights rests with the state, which under international law is the prime duty bearer for human rights. Corporations are not state actors; they do, however, have the responsibility to respect human rights. While the state’s responsibility includes the obligations to respect (that is, do no harm), protect (ensure that others do not harm), and fulfil (ensure that rights are realised) human rights, the corporate responsibility to respect means more than the limited responsibility of not harming human rights, although that is of paramount importance. It also includes the responsibility to conduct due diligence: by framing human rights policies, by conducting impact assessments, by tracking and monitoring performance, and by developing remedies to ensure that protection gaps do not exist.

Given that, how can technology companies deal with their human rights challenges and responsibilities?

They should adopt human rights policies. They must establish procedures to ensure that rights are protected and that, where infringements occur, due process is in place to deal with the situation. They must establish sufficient oversight so that incipient crises are not neglected. They must seek out external partners and consult actively – with consumers, users, journalists, human rights defenders, lawyers, and trade union activists – to gain a firm grasp of the human rights situation. And they should respond collectively when they face a challenge. They are not alone in this; other companies, civil society groups, and the public are in the same boat. They must face these human rights challenges together.

Myanmar is now getting wired fast. The growth will be extraordinary, and the time is not far off when even the remotest corner of the country may have a mast or tower connecting the community with the outside world. This represents a major opportunity for telecommunications and Internet-based companies to expand their markets. It also imposes a significant burden on them: to ensure that the rights of those new users are respected.”
