Now available: “Mannheimer Swartling’s Concise Guide to Arbitration in Sweden” in Russian
The second edition of Mannheimer Swartling’s guide, which provides the essentials for anyone participating in arbitration in Sweden, is now available in Russian translation.
The book offers thorough guidance – from the basis of the arbitration agreement, through the appointment of arbitrators and the conduct of the arbitral process, to the making, challenging and enforcement of the award. It addresses commercial arbitration as well as investment arbitration, and provides essential guidance on those aspects of contract law which will usually be relevant in a contractual dispute governed by Swedish substantive law. Finally, it provides practical information for lawyers visiting Sweden.
The translation was made by Russian ADR Lawyers, led by Tatiana Mikhaleva, at Mannheimer Swartling’s Moscow office.
Earlier this February, international and Swedish arbitration experts celebrated the book release together with authors Jakob Ragnwaldh, Fredrik Andersson and Celeste Salinas Quero at an SCC evening event.
The event gave attendees the rare opportunity to have the much-anticipated Guide introduced by the authors themselves. The Guide, the first article-by-article commentary on the SCC Arbitration Rules written for an international audience, offers thorough and user-friendly guidance on proceedings under the 2017 SCC Arbitration Rules, from the filing of the request to the termination of the arbitration.
After the introduction of the Guide, the authors mingled with the other guests during a reception with drinks and canapés.
I want to live in a world where all disputes can be resolved fairly, on the basis of all relevant facts and legislation, in a matter of seconds. Certain descriptions of AI promise that one day I will. These visions are equal parts utopia and dystopia, but we are not there yet.
Today’s AI applications are made to order: they are built to solve specific problems using specific data. This may change in the future, but for now the problems they can solve are not as broad as “decide this case”, but rather “find this specific type of document in an enormous database” or “find me relevant case law on this topic”. AI still assists, rather than replaces, humans in the judicial process.
Put very simply, AI solves problems by training on datasets: analysing vast amounts of data to identify trends and replicate behaviours. In our field, this data could be facts from previous cases and awards. The risk of replicating human bias present in such datasets is well known and widely discussed. Human bias is unfortunate but predictable, which makes it possible to detect and correct for.
There are other blind spots of AI that are harder to detect.
Generally, we do not know exactly which data AI bases its conclusions on. If you scan a large enough dataset for trends, you will often find strange correlations. For instance, there is a 95% correlation between per capita cheese consumption in the US and the number of people strangled by their bedsheets, not to mention the 99% correlation between the divorce rate in Maine and margarine consumption over the years 2000-2009. AI would not necessarily know to disregard these correlations. It cannot determine which facts are relevant; it does not grasp the difference between correlation and causation. This is often referred to as the black box problem of AI.
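To make the correlation-versus-causation point concrete, here is a minimal Python sketch. The yearly figures are illustrative stand-ins, not the actual Maine divorce or margarine statistics: two causally unrelated series that merely happen to trend downward together come out almost perfectly correlated.

```python
import numpy as np

# Two hypothetical, causally unrelated yearly series (made-up illustrative
# numbers): both simply trend downward over 2000-2009, which is enough to
# produce a near-perfect Pearson correlation.
years = np.arange(2000, 2010)
divorce_rate_maine = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1])
margarine_consumption = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(divorce_rate_maine, margarine_consumption)[0, 1]
print(round(r, 2))  # close to 1.0, despite no causal link whatsoever
```

A pattern-matching system handed these two columns would report a very strong relationship; it takes a human to notice that the relationship is meaningless.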
In a University of Washington experiment, an AI model was trained to distinguish huskies from wolves. It tested well but made some strange mistakes. On closer examination, it turned out that in the training data all pictures of wolves had snow in the background, whereas the pictures of huskies did not. The model had, in effect, concluded that wolves are four-legged, hairy creatures that walk on snow, and huskies are those that don’t.
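The same failure mode can be sketched in a few lines of Python. The feature data below is hypothetical (the original experiment used photographs, not hand-coded features): a naive learner that keeps whichever single feature best fits the training set latches onto the snowy background, because in training it happens to separate the classes perfectly, while the genuinely relevant feature is slightly noisy.

```python
# Toy illustration of the husky/wolf failure mode (hypothetical data).
# Each row: (snow_in_background, relevant_feature, label); label 1 = wolf, 0 = husky.
# In this training set, snow separates the classes perfectly; the relevant
# feature does not.
train = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1), (1, 1, 1),
    (0, 0, 0), (0, 1, 0), (0, 0, 0), (0, 0, 0),
]

def accuracy_using(feature_index, data):
    """Accuracy of predicting the label directly from one binary feature."""
    return sum(row[feature_index] == row[2] for row in data) / len(data)

# A naive learner: keep whichever single feature fits the training data best.
best_feature = max([0, 1], key=lambda i: accuracy_using(i, train))
print(best_feature)  # 0 -- the snowy background wins (8/8 vs 6/8 accuracy)

# A husky photographed in snow is then confidently misclassified as a wolf.
husky_in_snow = (1, 0)
prediction = husky_in_snow[best_feature]
print("wolf" if prediction == 1 else "husky")  # prints "wolf"
```

The learner is not wrong by its own lights: snow really is the best predictor in its training data. The problem is that nobody told it, and it had no way to know, that snow is irrelevant.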
The ability to separate the relevant from the irrelevant is arguably a core strength of the human mind over AI.
The first areas where we are starting to see AI applied in disputes are simple, repetitive matters with large case volumes. Take parking-ticket appeals, where interesting AI applications already exist. Without going into the specifics of any particular application, imagine training an AI on photographs of previous parking violations to teach it correct and incorrect parking. Since we don’t know which data it assesses, we don’t know whether it has looked at all the pictures and concluded that a parking violation can only occur when it’s sunny, or when the car is photographed from behind.
Naturally, this lack of transparency is far from ideal in a judicial context.
These are well-known challenges in AI, and there is a movement towards Explainable AI, where both the result and the methodology can be understood by human experts. With better transparency into the methodology, we might get at least one step closer to the fair and unbiased super judge I described at the beginning. But there are still many steps to go.