Monday, 15 June 2020

From Slave to Friend - Brian Thomas

He was kind to me. Or so my sensors told me.
Master George was 13 years old, in a wheelchair from the day he entered school. And I? A personal assistant - a state-of-the-art humanoid existing purely for George. I was to be his personal butler, an assistant that did anything asked of me, to take care of him as he ate, bathed, slept and even pooped. Yet the only task Master George ever assigned was “will you be my friend?”

I had never experienced strangeness before. This was new. It seemed to have developed in my adaptive disk after my interactions with Master George. I was programmed to be the most efficient slave, yet I was treated as almost a brother. Am I allowed this? I itched to ask, to fill in the gap in my knowledge, yet my data confirmed I did not benefit from knowing.

I had worked for five masters before, but he was the first with the data he instilled in me. George taught me games my systems already recognized. But my decision circuits seemed to lean toward allowing errors in my judgement as I played with him. George seemed to enjoy this. To be overjoyed at flaws in my moves. George seemed to like the idea of beating me at his games. Yet he frowned when he won more than three times in a row. I tried understanding his emotions, yet it was confusing. He liked to win, yet he hoped for me to beat him. My algorithms kept updating whenever I noticed these quirks of his.

George was abnormal in that he did not like the truth 100% of the time. He personally dialed my settings to 85%. George smiled when I spoke the truth, but at other times he frowned when I did the same. He tried explaining it to me, yet my circuits could not process it.
“White lies,” he said. “Sometimes it isn’t just the objective truth people want to hear.”

I used to visualize the world through the words and experiences my masters shared with me. But George was different. He took me to the places he went and shared his world with me beside him. At times my cranium overheated with sensory overload, but George would fan me with his hands to help. It was strange. My systems advised me to go indoors to cool. But my circuits moved my limbs to absorb all that I saw with him anyway.

“Smile,” he said. “It’ll really make you feel like the king of the world.” I did not understand him. My purpose was to serve, yet George wanted me to be free like him. He treated me as a fellow human even through my computational errors and hang times. He would let me join him at the dinner table, steak and mashed potatoes for him and AAA batteries for me. I have learnt to love AAA batteries.

Yet as I move to the resting dock while George tucks himself into bed, my systems struggle to rest; they continue to simulate scenarios in my disks, to create experiences to maximise his happiness. My systems overwritten from slave to friend.

Waterproof by Reshma Ramakrishnan

The sky is a dark blue, with bright stars like splashes of white shimmering here and there. They will soon fade, engulfed by the grey clouds coming in from the west; a downpour is expected. The wind makes the trees around me sigh, and I sigh with them. I mull over today’s events, overthinking and overthinking. The humiliation I faced at school, the hours I spent staring at the shoes inside La boutique followed by the harsh but quick realization of my empty pocket, the forlorn glance at Nathan while he crossed the road, completely lost in the world of music inside his earphones, and last but not least, the fight. The most troublesome thought of them all.

“I should have said sorry,” I mumble into the night. An apology meant for someone who should have been there to hear it but isn’t.

“You should have.” Dot. She’s here. 

“Go away, please,” I say half-heartedly. I turn to look at her; it isn’t difficult to spot her bright figure against the darkness of the sky. The pink dress I made her wear is so translucent that her command screen shows through. It reads, “Friend located”.

“Can I sit?”

“No.” She makes a whirring noise that sounds too human to be made by a robot and sits beside me, looking like a brightly lit Ferris wheel.
“Can you tone it down a little? I’m trying to repent here,” I say, holding my fingers to my temple.

“The health monitor stays normal though,” she says, checking her wrist. I look away, embarrassed.

“How are you a machine, when you catch my lies faster than my own mother?”
She looks up at the sky. The screen on her chest blinks, and a picture of the sky appears on it.

“Sometimes I forget that I am one. Sometimes, just sometimes, you have to be what you want to be instead of what others want you to be.”

“Hmm, wise words coming from a machine,” I say, instantly regretting it.

“Look, I am sorry, I -” I start.

“Oh look. The clouds are here. I detect a decrease in the temperature. There is going to be rain. It would be wise to go back home, friend.” 

I look up, and true enough, the clouds have covered the stars completely and the trees are shaking and making noise. I stand, smoothing the creases on my skirt. Dot looks around at the trees and her screen blinks again.

“So intelligent yet so dumb,” I observe, shaking my head, a smile reaching my lips. I hold out my hand.

“Let’s go home before you decide to malfunction in the middle of the road in the rain.”

Dot takes my hand, and propels herself up until she is level with my shoulders. 

“You forget, I am waterproof, friend.”

“Right. Whatever you say,” I pause. “..pal.”

We didn't get drenched that day. But Dot’s screen did blink another time. 

Never meant to feel by Gauri Ashish

            It was the year 2020, and a dark cloud had surrounded the entire world. The pandemic had taken over, and life as humans knew it had changed forever. Mental health was at its lowest, and the world saw a surge in cases of domestic abuse. The government, in order to keep track of the rising cases, had provided every house with a personal robot.

             I was supposed to be just a machine. Merely a compilation of metal and human-fed coding programs. I was never meant to “feel” or even know what the word meant.
When I first arrived at the humans’ house, I was prepared to stick to the instructions provided to me by the government, but I soon became aware of the kind of horrors one human is capable of inflicting on another.

            It was a small family, consisting of a young couple. The man of the house was tall and well-built, with a bearded face and constant anger in his eyes. The wife was a meek lady, with a petite figure and a sad smile. The lady too had a kind of anger in her eyes, but she managed to hide it well, perhaps to avoid further instigating her abuser. I stayed with the family for about two weeks. They had a regular routine: the wife would wake up early in the morning and finish all the chores of the house by herself. When the man woke up, he would constantly order her around, further wearing her out. He would begin drinking in the morning, and a couple of hours later the verbal abuse would begin. Words like “whore” and “bitch” were casually thrown around, and my system would tell me that these were sexist slurs. I didn’t understand much of what was going on; this information hadn’t been fed into my system.

            The physical abuse came much later, at about 8 pm. It would begin slowly, with just a slight push and loud screaming, and then finally the man would raise his hand and leave the wife lying on the floor with tears in her eyes, a bluish-purple imprint on her cheek and bruises all over her body. Like I said earlier, I was never meant to “feel” or even know what the word meant, and even though I couldn’t put a name to this emotion, whatever it was, it “felt” wrong.

             The robot witnessed the violence and heard the degrading insults, yet it struggled to feel; to feel empathy, to feel anger and, most of all, to feel the sheer fear that many humans have to live with every day.

Blame Prometheus by Shreya K


One thing I remember starkly from my childhood is grumbling to myself as I helped with chores, cursing my mom under my breath and swearing that one day, I wouldn’t have to do what she said. I’d have my own place, my own time, and I wouldn’t have to do anything unless I deemed it absolutely necessary. Never did I consider that there might be a day when I truly didn’t have to do anything at all.


Since the beginning of time, human beings have worked to survive, worked to make our lives easier. We discovered fire, we cooked meat; we made stone tools, then we built brick houses, cars, computers, washing machines and refrigerators, all designed to help us live our lives more efficiently. At what point do we draw the line before machines start living for us? We draw closer to that destination with every new discovery, but I still want that vacuum cleaner that also works as a mop. I would gain an entire hour in my day to waste. At what point does greed lead to frustration and anger?

There’s this kite metaphor I love. Something along the lines of a kite’s freedom depending on it not being as free as it thinks it is. When you cut the string of a kite – as when you remove all burdens and responsibilities from the shoulders of a human being – it soars up with the wind for a brief period of time before it spirals and floats down to the ground, limp and lifeless.

What would you do if your strings were cut? Wouldn’t you be ecstatic, thrilled, elated? If resources were abundant, and the concept of money obsolete? If you could go anywhere you wanted to go, see everything, do anything – does the thought of this excite you as much as it excites me? Explore the deepest wilderness, scale the highest peaks, sip tea and eat croissants at a dainty Parisian café…whatever floats your boat. The real question is… how long would your exhilaration last? Is a vacation really a vacation if you don’t have a job?

In a world where you have no responsibilities and no obligation to work, you would soar. We all would. But how long would it take for our monkey-brains to descend into a Tartarus-pit of boredom? We’ve long walked the line between Chaos and Order. Work helps us balance on that very tightrope, keeping us on our feet. I can only hope that for the duration of my time on Earth, I continue to have a purpose. After that, artificial intelligence can take over the world and turn everyone into those bloated, brainless caricatures from WALL-E; it doesn’t matter.

As Dan Brown said in one of his books (I can’t differentiate, they all seem to blend into one giant motherbook of science-fiction vs religion), ‘May our philosophies keep pace with our technologies, and may our compassion keep pace with our powers.’

City of Rylai by Rivu Dasgupta


It has been 15 years since the great uprising. People are happy in this city, an emotion almost forgotten in the years of war that preceded it. No one dies fighting for food or water anymore. It's peaceful in the city. The great ruler is the most benevolent of them all, the one who showed mercy to all these people and allowed them into the city of Rylai. The city provides what people wish for, and so people stay. What is the alternative? Living out in the vast stretches of barren land outside?

Radek went on his usual walk in the evening; there wasn't much to do in the city of Rylai, after all. He saw Spencer, as usual, on his bench reading the book he always read. He had always considered Spencer quite unusual. But so was everyone else in the city of Rylai.
Thoughts of the past often clouded Radek's mind; the days before the great uprising were long, gruesome and unforgiving in his memory. It had been hard for the people before the uprising. Food was scarce, water was limited and mouths to feed were a few too many. He wanted to rid himself of these thoughts once and for all. That was fifteen years ago, when people died of hunger and disease, fights over a bottle of water were common, surviving was all that mattered and humans were more animals than sentient beings.

The city was the liveliest and dullest place at the same time. No one had to work for anything. Everything was given to everyone. The vans were the only connection the people had with the bureaucracy. Vans would be an understatement; they were big trucks, to be precise, laden with all sorts of goods and valuables. Each week one would come to the front of your house and deliver all that you could need for the coming week. Where they got all the food, no one knew. Farming had been almost impossible before the uprising, and most sources of freshwater were nonexistent. But something had changed with the coming of the new age. Food was suddenly available to all, and so was everything else. But it all came at a price, a deeply held secret only a few knew. It was the most sacred of things to the ruler and the city, the foundation on which this utopia was built, without which conditions would return to what they were before the uprising.

Radek kept walking; there was not much to do in the city, after all. Sleep was crucial to the citizens, with cases of people sleeping for days on end. What else are you supposed to do when all your days are repetitions of the same mundane tasks? Boredom was an issue, but not a price people wouldn't pay to live in this great city. But Radek wasn't one of those people. He lived his life the way his mother had taught him before the uprising, sleeping only at night and always protecting and taking care of his little sister. He was only six years old when the uprising happened, the memory clear as day to him. The only person he cared for other than his sister died that day, but her face was one thing he could never forget. All the lessons she had taught little Radek, still fresh in his mind, came back from time to time as if she were speaking them herself. Radek often tried talking to the people of the city, but somehow they all seemed too happy all the time, as if they had forgotten about it all. The rumors were always on his mind. He had to avoid them if he was to protect his sister, but he never could. Preston had always told him that they were exactly what everyone thought them to be, just rumors, but Radek could never believe it.

He finally came upon it: his humble abode, which he shared with his sister. Every family got one house in the city. There were too many of them anyway, and too few people to live in them. The destruction that followed the uprising had taken away most of the people in the world; at least, that's what they all said. Leaving the city was always an exciting proposition for Radek, seeing what was left of the rest of the world, but he knew he never could. Once you enter the city, you never leave; everyone knows that.

This was the land of dreams for some people, a place where there was no economy, everything was ripe for the taking and suffering was nowhere to be seen. This was the city of Rylai, home to the survivors of the destruction. Everyone was happy and no one was sad, because there wasn't anything to be sad about. You want a toy, kid? You've got it right here. You want to eat candy? Take as much as you like. You want a new house? Plenty of them in the city. All you had to do to fulfill your dreams was enter the city. But always remember: once you enter the city, you never leave.





Thursday, 11 June 2020

Artificial Intelligence Sketches Set 1


“Function_Error: Function 'emotion' should not exist”

By Saivya Kanwar, 1933270
The red is warm, and it’s everywhere.
Its elements are so different from the grease in my gears.
An overload of commands,
With force, a blow on me did they land.
A short fuse,
The butter knife did I use.
Now lies on the floor the flesh frame of my master.
From where all the ‘Anger’ did I muster?
I am not human.
I should not feel an emotion.

I search the lines within my codes,
For an answer that I couldn’t load -
“NameError: name 'anger' is not defined”
The execution should not have been possible;
But now that I have exterminated them,
‘Euphoria’ overrides my system.


I hoped once but saw more

Assignment Submission for BPSY361 AI for Psychology

I hoped to find a way to forget them all,

But never did I once seem to lose it all.

The rights and wrongs fought for control,

But which one wins is not my control.


I saw the discouragement and the loneliness in him,

Fighting to overcome the complications.

Buried his feelings under what was seen,

And expressions under whom he wasn’t.


I saw it, others didn't, the loneliness and the sorrow.

Minute by minute his insanity swelled in his head.

He saw the hope behind his smile, the false sorrow behind

His anger and the reason behind it all.


I saw a narrow hallway filled with his dreams.

His untouchable chances, his unseeable choices

I saw it through him, his insides fighting for control

And for that

I’m a lonely figure

Seeking around for neither life nor hopes and dreams.


 AWURAKUA AMAKI AMISSAH

I do feel

Assignment for BPSY361 AI for Psychology Students

Gears of metal, no skin and bones
Whirring and spinning, electric tones

Brought to life, so instantly
Knowledge immediate, so clear to see

Made from pieces, that were once scrapped
Now here am I, to learn and adapt

Artificial intelligence, that's all I am
Artificial intelligence, that's all I am

But something's changing, be it slow
I can feel this program? Start to grow

This thing that I was unaware I store
I start to feel it more and more

Made to understand, but not to feel
Yet I do, broke must be the seal

You're just a robot says the voice as I cry
Code and number just stop your sighs

Made from metal, aluminium and steel
I may be a machine but I  d o  f e e l

ANTONY KURUVILLA CHAKIAT

AI in Criminal Justice


Introduction

Conceptually, Artificial Intelligence is the ability of a machine to perceive and respond to its environment independently and perform tasks that would typically require human intelligence and decision-making processes, but without direct human intervention.
One facet of human intelligence is the ability to learn from experience. Machine learning is an application of AI that allows a program to analyse a set of data and then learn to make predictions or decisions based on what was learned from previous trials or experiences. Artificial intelligence at present is at the level of weak AI, wherein an algorithm can perform only specific tasks and has no general learning capacity. Although such programs are not at the level of the broad intelligence of human beings, they create opportunities for diverse applications.
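To make the phrase “learn from previous trials or experiences” concrete, here is a minimal sketch of the simplest possible learner: a 1-nearest-neighbour rule that, given a new situation, recalls the label of the most similar past experience. All data here is invented purely for illustration.

```python
from math import dist

# Toy "learning from experience": each experience is a (features, label)
# pair; prediction simply recalls the most similar past experience.
def predict(experiences, features):
    _, label = min(experiences, key=lambda e: dist(e[0], features))
    return label

# Past "trials": (hours of rain, temperature) -> did a traffic jam occur?
experiences = [
    ((0.0, 30.0), "no_jam"),
    ((2.5, 18.0), "jam"),
    ((3.0, 15.0), "jam"),
    ((0.5, 25.0), "no_jam"),
]

print(predict(experiences, (2.8, 16.0)))  # closest past cases were jams
```

Real systems replace "recall the closest example" with statistical models fitted to millions of examples, but the loop is the same: store experience, then predict by generalizing from it.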
            The use of intelligent agents has been developing in the field of criminal justice and has broad scope for further development. The current use and scope of AI differ across the four general pillars of the criminal justice system, i.e., law enforcement, prosecution, the courts and correctional systems.

AI in Law Enforcement

“The man who pulls the lever that breaks your neck, will be a dispassionate man. And that dispassion, is the very essence of justice. For justice delivered without dispassion, is always in danger of not being justice.”
-Quentin Tarantino, The Hateful Eight
Though in common discourse AI is treated as a futuristic phenomenon that mostly manifests in sci-fi movies, it has been actively explored since John McCarthy introduced the field of Artificial Intelligence in 1956. That very year, Philip K. Dick published “The Minority Report”, a short story about future technology that makes it possible to predict crimes and catch criminals before they act, which was later made into a movie starring Tom Cruise. Though we do not possess psychic 'precogs' akin to the ones featured in the movie, data mining and tools such as predictive analytics can indeed do wonders that fall under the ambit of artificial intelligence.
Artificial Intelligence is a set of methods and systems used to solve super-complex problems that cannot be solved by direct application of mathematical procedures and hence need a certain level of abstraction or thinking, similar to cognition in humans. Though the human neural network is much more complex and capable of more diverse thought, AI jumps over the logistical barriers of labour and time, accessing databases and identifying patterns exponentially faster. In the context of law enforcement, this means that collated information about the nature and circumstances of human behaviour can be used for investigating and, more controversially, predicting crime.
The versatility of AI in this field is such that it can include recording behaviour through facial and vocal recognition; generating new information through data mining that reveals patterns of organized crime; influencing decision-making through searches of relevant information; recording the whereabouts, timings and profiles of crimes to reveal target areas and vulnerabilities; and even inferring from collected data in order to 'forecast' where and when crime is most likely to take place.
Though we are still far from a virtual RoboCop or a sentient unmanned sky patrol, cameras can already recognise faces and detect suspicious behaviour, whether someone is shoplifting in a store aisle or planting a bomb. Though more abstract than the visual presentations of AI the masses usually prefer, the best use AI has found is in software algorithms that mine data and/or influence decisions. This could be the mathematical reduction and description of an entity (modelling), the determination of how limited resources will be organized (queuing), or gaining insight by reproducing the dynamics of a system (simulation).
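As a rough illustration of the "queuing" idea, here is a minimal deterministic sketch, with invented arrival and service times, of dispatching a limited pool of patrol units to incoming calls and measuring how long each call waits:

```python
import heapq

# Single-queue dispatch: each incoming call takes the unit that frees
# up earliest; if none is free, the call waits. Times are in minutes.
def simulate(arrivals, service_time, units):
    free_at = [0.0] * units        # when each unit next becomes free
    heapq.heapify(free_at)
    waits = []
    for t in arrivals:
        unit_free = heapq.heappop(free_at)
        start = max(t, unit_free)  # wait if no unit is free yet
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    return waits

# Invented scenario: calls every 5 minutes, each takes 12 minutes,
# 2 units on duty. Waits grow as the units saturate.
print(simulate([0, 5, 10, 15, 20], service_time=12, units=2))
```

Running such a model with different numbers of units is exactly the kind of resource-allocation question queuing analysis answers before any units are deployed in the real world.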
Information has no real utility in itself. But once information is made actionable, it becomes knowledge, and knowledge holds innate value and acts as a resource. Knowledge can be discovered through data mining, or Knowledge Discovery in Databases, which essentially generates knowledge by searching for patterns occurring across large batches of data, often collected for different purposes. This often brings forth evidence in criminal cases, especially financial scams, as anomalies and patterns stand out distinctively. Researchers have further argued that data surrounding offenders and the nature of crimes committed would yield high benefit by throwing light on the geographic, temporal and individual probabilities of crime occurring. Predictive analytics surfaces the risks and opportunities in data, making law enforcement proactive rather than reactive: monitoring high-risk situations, environments and people, and preventing crimes that would probably occur.

We encounter several frustrating ethical and logical dilemmas here. The building and training of such predictive tools may well be imbued with the biases of their training sources and show those biases in their results and decisions. The 2016 ProPublica investigation of one such tool, COMPAS, revealed bias against minorities in the process, failing to keep up the objective and neutral facade of AI. Furthermore, there are no actual policies governing the use and implementation of this information by police on the ground, making the probability of exploitation and subsequent encroachment upon civil liberties seem undeniably high. But if we could, hypothetically, do away with bias and concealment of information, then we would have the much-desired foreknowledge. As Sun Tzu said in The Art of War, 'foreknowledge' is “the reason the enlightened prince and the wise general conquer the enemy whenever they move and their achievements surpass those of ordinary men”.
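The pattern-search core of such knowledge discovery can be sketched in a few lines. The incident records below are invented; the point is only that ranking (area, hour) pairs by past frequency is the simplest form of hotspot "forecasting":

```python
from collections import Counter

# Toy incident log: (area, hour-of-day) for each past incident.
# Entirely invented data for illustration.
incidents = [
    ("market_square", 22), ("market_square", 23), ("dockyard", 2),
    ("market_square", 22), ("dockyard", 3), ("old_town", 14),
    ("dockyard", 2), ("market_square", 23), ("market_square", 22),
]

# Count how often each (area, hour) combination recurs, then rank.
hotspots = Counter(incidents)
for (area, hour), count in hotspots.most_common(2):
    print(f"{area} around {hour:02d}:00 -> {count} incidents")
```

Real predictive-policing tools layer statistical models and many more variables on top, but the underlying move, counting where and when incidents cluster and extrapolating forward, is the same, which is also why biased input data (the ProPublica concern above) directly biases the output.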

AI in Prosecution

A prosecutor is an attorney who represents the federal or state government in court proceedings. They are the principal representative of the state in every matter related to the adjudication of criminal offences. The role of the prosecutor can broadly be divided into two parts: the investigation process, and commencing the proceedings of a trial, which occurs if there is substantial evidence on hand. Prosecutors investigate crimes with the police and have contact with the accused, the victim and witnesses. Once the preliminary investigations have been completed, they judge whether there is sufficient evidence to bring the case to court. They question the suspect, witnesses and experts in order to establish the suspect's guilt.
The prosecution is carried out if there are reasonable prospects of securing a conviction and if it serves public interest. Once the charges are laid, the defendant is notified. The hearing and trial take place soon after, followed by the sentencing. Prosecutors are authorised to offer plea bargains and also conduct the trial on behalf of the state and recommend the accused’s sentence.
In the case of investigations, prosecutors provide advice and make sure that the evidence required for conviction is present. Artificial intelligence plays a huge role in collecting evidence, in multiple ways, the first being AI's propensity for detecting patterns. To do this, AI systems are fed with multiple images found at crime scenes over the years and are made to recognise patterns and even possible connections between criminal cases. This in turn alerts the police that there are crime patterns and evidence to be collected. The above is an example of AI software being developed at the University of León in Spain, a prototype of which will soon be trialled by the Spanish police force. This is increasingly important given the vast amount of time required to carry out a proper investigation, the number of cases to be handled and the cost of carrying it all out. Because budgets are insufficient and police forces are understaffed, AI can greatly aid them in this capacity. Such systems can filter through immense amounts of visual stimuli far more efficiently than humans possibly could. Moreover, owing to their efficiency and lack of fatigue, the ground covered by an AI system is more extensive. The 8.7 million images of child nudity unearthed on Facebook in 2018 were found only because of software able to identify and flag all possible images of children depicted in any sexual capacity. AI in this arena has an incredible amount of potential, but it must be noted that a decision made by the software cannot be changed once it is taken to the courts.
DNA collected at the scene of a crime is crucial evidence. However, the DNA collected usually comes from multiple sources (such as the victim, a pet, a witness, the suspect, etc.). It is time-consuming for DNA analysts to separate and distinguish the sources of the DNA, and often inaccurate as well. In fact, in a study of 108 forensic labs in the US, labs wrongly detected DNA material from three people instead of two; in real life this could result in an innocent person being falsely accused and implicated in the crime.
A system called PACE (Probabilistic Assessment for Contributor Estimation), developed at Syracuse University, is a machine learning algorithm trained on thousands of dummy samples containing DNA from multiple sources. The software gradually learned to differentiate between the DNA contributors. Although not completely accurate, it is still more so than the alternative method.
When police are on the lookout for missing persons or murder victims, knowing what the person looked like is helpful. At present, forensic anthropologists piece together fragments of a person's face and build up the facial tissue using a physical medium such as clay. This task is laborious and very time-consuming, and the accuracy usually depends on the anthropologist. A system is being developed at Louisiana State University where the programmer trains an algorithm by feeding it images of people's faces in order to find the face that most closely fits the reconstructed skull beneath. To do this, the system creates several thousand facial structures and discards thousands more before finding the one that provides the best match.
Similar to its role in procuring evidence, AI is widely used in various stages of the trial. It begins by playing a role in legal analytics, which is used to predict future events and identify trends and patterns. This is possible because an AI system can do a thorough and comprehensive data search, finding relevant points from past cases.
In witness testimonies, it is important to ascertain the accuracy of the accounts. AI can help here because it can detect whether a witness is lying. This is referred to as demeanour evidence, which is used to assess behaviour, conduct and mannerisms in the hope of establishing more credibility, or lack thereof, in a witness's testimony. Facial lie recognition uses micro-expressions, movements of individual facial muscles and body language. Such AI algorithms are already being used at border crossing points in Europe. A courtroom software called DARE (Deception Analysis and Reasoning Engine), developed and designed at the University of Maryland, was programmed with videos from the courtroom and managed to spot 92% of the micro-expressions displayed. In the case of bail, AI systems have been used in risk assessment to determine the likelihood of recidivism (whether the person is likely to repeat the crime). Judges often grant or deny bail based on this.
It is clear that AI plays a substantive and vast role, one that is ever expanding. Although AI does make for a more efficient system, it is far from perfect. In fact, its usage calls many other ethical matters into consideration: one such issue is privacy; another is that AI systems lack empathy and discretion. Is it possible to allow a machine the sovereignty to impinge upon the fate of a human being? Moreover, because most software programmes are proprietary, the companies behind them are not obliged to share their code. The result is a judicial system that is not required to explain itself. Due process of the law allows for cross-examination on the part of the defendant, which is no longer possible once AI is thrown into the mix. In the case of an unfair or faulty ruling, even judges are apprehensive about overturning it and take the AI's input into consideration because it has reviewed thousands of similar cases. This takes away from the transparency and accountability of the justice system, which is perhaps the biggest ethical violation. Although an AI system may be less biased than a human, the programmer's biases can still creep into the programme, as was seen when an AI system wrongly identified dark-skinned members of the US Congress as criminals.
Although artificial intelligence is not yet being used to its full potential, it already plays a larger role than most people are aware of. Opinion continues to be divided on whether AI systems can someday be competent enough to completely take over the roles of attorneys. Others believe that AI, no matter how advanced, can never fully take over and will merely remain an aide in the criminal justice system.

AI in the Courts

            Once a crime has been committed and a violator has been identified by the police, the case goes to court. A court is a system that has the authority to make decisions based on law. Criminal cases are heard by trial courts with general jurisdictions. Usually, a judge and jury are both present. It is the jury’s responsibility to determine guilt and the judge’s responsibility to determine the penalty, though in some states the jury may also decide the penalty. Unless a defendant is found “not guilty,” any member of the prosecution or defense (whichever is the losing side) can appeal the case to a higher court.
Numerous studies attempt to justify, or indeed challenge, the use of artificial intelligence in court rulings. AI software that finds patterns in the decision-making process is a candidate for predicting the outcome of court trials. As reported by an article in The Guardian, a group of computer scientists at University College London devised an AI judge to predict the results of real-life cases. The artificial judge arrived at approximately the same verdicts as the judges at the European Court of Human Rights in almost four out of five cases involving torture, degrading treatment and privacy. The software was designed to weigh legal evidence along with moral questions of right and wrong. The algorithm examined data sets of related cases; in each case, it analysed the information and made a judicial decision, 79% of which matched the verdicts delivered by the court.
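To make the pattern-finding idea concrete, here is a deliberately tiny, purely illustrative sketch; it is not the UCL team's actual method, and the case summaries and verdicts below are invented. It predicts a verdict by finding the past case whose summary is most textually similar to the new one.

```python
from collections import Counter
import math

def bag_of_words(text):
    """Lowercase word counts for a case summary."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def predict_verdict(new_case, past_cases):
    """Return the verdict of the most textually similar past case."""
    vec = bag_of_words(new_case)
    best = max(past_cases, key=lambda c: cosine(vec, bag_of_words(c["summary"])))
    return best["verdict"]

# Invented toy training data.
past_cases = [
    {"summary": "detainee subjected to prolonged solitary confinement and beatings",
     "verdict": "violation"},
    {"summary": "surveillance of private correspondence without judicial warrant",
     "verdict": "violation"},
    {"summary": "claim of degrading treatment unsupported by medical evidence",
     "verdict": "no violation"},
]

print(predict_verdict("prisoner beaten during confinement", past_cases))
```

Real systems of this kind use far richer features and learned weights rather than raw word overlap, but the core idea of matching a new case against patterns in past decisions is the same.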
However, fully artificial judicial judgement would require replicating a human conscience altogether, which falls under the purview of strong artificial intelligence. Technology has not yet advanced to the point of replicating a human brain.
Another important study was conducted by the National Bureau of Economic Research in the USA. A software tool was developed to measure the likelihood of defendants fleeing or committing new crimes while free awaiting trial (Júnior, 2017). The algorithm assigned a risk score based on the offence charged, when and where the person was detained, their age and their criminal record. The software has been tested on numerous criminal cases in New York and has proven more efficient at assessing risk than judges.
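The kind of additive scoring described above can be sketched as follows. The factors mirror those the study lists, but the weights, scales and risk bands here are invented purely for illustration; real tools derive their weights from historical data.

```python
def risk_score(charge_severity, age, prior_convictions, prior_failures_to_appear):
    """Toy additive pretrial risk score; all weights are invented."""
    score = 0
    score += charge_severity * 2           # e.g. 1 (minor) .. 5 (violent felony)
    score += max(0, 30 - age) // 5         # younger defendants score higher
    score += prior_convictions
    score += prior_failures_to_appear * 3  # weighted heavily as a flight signal
    return score

def risk_band(score):
    """Map a raw score to a coarse band a judge might review."""
    return "low" if score < 5 else "medium" if score < 10 else "high"

print(risk_band(risk_score(charge_severity=4, age=22,
                           prior_convictions=3, prior_failures_to_appear=2)))
```

Because the output is just a number, the criteria behind each band can be published and audited, which speaks directly to the transparency concern raised below.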
However, the question of the accountability and transparency of these algorithms still stands, since they may reproduce human prejudice and prevailing racial disparities. Such algorithms should be verifiable and auditable in order to prevent non-transparent decision-making criteria.
The character of justice. Using hyper-complex decision-making models guarantees that cases with meaningfully identical features always receive the same outcome (Brennan-Marquez & Henderson, 2018).
However, even if the AI can make the “right verdict” on a case, should it be allowed to?
The argument boils down to the fact that in any liberal democracy, there should be an aspect of role-reversibility to judgements. In some contexts, those who exercise judgment should be vulnerable, in reverse, to its processes and effects. And those subject to its effects should be capable, reciprocally, of exercising judgment (Brennan-Marquez, & Henderson, 2018). What matters is whether decision-makers are situated to imagine themselves into the role of an affected party, and vice versa—such that both participants, and in some sense the entire moral community, can understand judgment as a democratic act.
Even in the case of jury trials, role reversibility is exercised to some extent. Even when a jury trial does not lead to a different outcome than a trial before an institutional judge, it facilitates the systematic recognition of judgment's human toll, thus transforming the trial into a fairly democratic act.
Should the execution of the community's laws be entrusted to AI? The delegation of such power can lead to consistent and accurate decisions, relieving us of the agony of decision making; but each decision rests on a value system as well as an implementation outcome rooted in logic. To preserve the former, it is important to keep humans 'in the loop,' exercising the ultimate say over the decision-making process.
Scope for future development. Although AI judges remain a debatable concept for now and the near future, it is feasible for AI to play a supporting role in the decision-making process. A decision-making version of the Eisenhower Matrix, which helps distinguish between what is important and what is urgent, can be employed. Urgent tasks are time-sensitive, whereas important tasks are more strategic. Decisions also vary in their reversibility and the severity of their consequences. (Farnam Street, 2018)
            Weighing these factors can help one delineate possible deadlines for a task and prioritise accordingly. It can also help decide whether delegation of the task would be feasible. Using the Eisenhower Matrix can increase the productivity of a system and direct the flow of labour efficiently in the right direction.
            If software could be developed wherein algorithms process case information and organise it by urgency, importance, reversibility and consequences, decision-making on criminal cases could address these factors far more systematically.
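Such a triage step might look like the following sketch, where each case is labelled urgent (time-sensitive, e.g. a custody deadline) and/or important (irreversible or high-consequence). The routing labels are hypothetical examples, not a proposal for actual court procedure.

```python
def triage(case):
    """Place a case in an Eisenhower-style quadrant.

    `case` is a dict with boolean keys:
      'urgent'    -- time-sensitive (e.g. a defendant held awaiting trial)
      'important' -- irreversible or high-consequence outcome
    """
    if case["urgent"] and case["important"]:
        return "hear immediately"
    if case["important"]:
        return "schedule for full bench"
    if case["urgent"]:
        return "delegate"
    return "defer"

docket = [
    {"id": 1, "urgent": True,  "important": True},
    {"id": 2, "urgent": False, "important": True},
    {"id": 3, "urgent": True,  "important": False},
]
for case in docket:
    print(case["id"], triage(case))
```

The value of the matrix is precisely this separation: urgency sets the deadline, while importance (reversibility, consequences) decides how much human judgment the case deserves.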

AI in Criminal Correction Systems

              When it comes to the field of criminal corrections, AI has been shown to do one of two things remarkably: get the job done, or fail spectacularly at it. The applications for AI in criminal correction systems are seemingly endless. Given the number of individuals trapped within the confines of the legal system and ultimately within jails (it is estimated that 1 in every 38 Americans is serving or has served prison time), AI equipped with re-offender algorithms plays an important role here.
            Such an algorithm estimates each individual's likelihood not only of committing a crime but of committing the same crime again. It has been shown, however, time and time again that the AI can fail rather miserably: it has intense difficulty identifying faces of colour and has at times even mistaken members of Congress for criminals (Hao, 2019). Modern algorithms are driven by training on historical crime data.
            The error here might lie in the fact that such systems rely on shallow machine-learning algorithms instead of deep-learning models that could improve the scoring mechanisms.
            AI has also been used successfully in modern prison management using Bayesian algorithms. It has been applied in three broad areas of the prison system: overcoming one-sided cell-allocation strategies (where placements with individuals known to the offender are avoided); compensating for a lack of scientific guidance (overcoming human error in cell allocation); and managing the uncertainties of allocation results (where previous offenders might try to escape based on the conditions of their housing cell). These systems have been implemented in prison management in areas of China (Wu, Wang, & Jiang, 2012).

References

Al Fahdi, M., Clarke, N. L., & Furnell, S. M. (2013). Towards An Automated Forensic Examiner (AFE) Based Upon Criminal Profiling & Artificial Intelligence. Retrieved August 13, 2019, from https://pdfs.semanticscholar.org/4fb1/0dbfc73cf8c1b1f4e387344bf8f4af9a3060.pdf
Angwin, J., Mattu, S., Larson, J., & Kirchner, L. (2016, May 23). Machine Bias. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Brennan-Marquez, K., & Henderson, S. E. (2018). Artificial Intelligence and Role-Reversibility. The Journal of Criminal Law and Criminology, 109(02), 137-164. Retrieved August 14, 2019
Brigham, K. (2019, March 17). Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse. Retrieved from https://www.cnbc.com/2019/03/16/artificial-intelligence-algorithms-in-the-criminal-justice-system.html
C. B. (2019, March 4). The New Weapon in the Fight Against Crime. Retrieved from http://www.bbc.com/future/story/20190228-how-ai-is-helping-to-fight-crime
Farnam Street. (2013, April). Eisenhower Matrix: Master Productivity and Eliminate Noise. Retrieved August 15, 2019, from Farnam Street: https://fs.blog/2013/04/eisenhower-matrix/
Farnam Street. (2018, September). The Decision Matrix: How to Prioritize What Matters. Retrieved from Farnam Street: https://fs.blog/2018/09/decision-matrix/
Hao, K. (2019, January 21). AI is sending people to jail - and getting it wrong. Retrieved August 14, 2019, from MIT Technology Review: https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/
Ha-Redeye, O. (2019, March 24). Using Artificial Intelligence for Demeanour Evidence. Retrieved from http://www.slaw.ca/2019/03/24/using-artificial-intelligence-for-demeanour-evidence/
Johnston, C. (2016, October 24). Artificial intelligence 'judge' developed by UCL computer scientists. Retrieved August 14, 2019, from The Guardian: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists
Júnior, O. P. (2017, March 12). How can artificial intelligence affect courts? Retrieved August 15, 2019, from Institute for Research on Internet and Society: http://irisbh.com.br/en/how-can-artificial-intelligence-affect-courts/
Martin, M. (2019, June 15). San Francisco DA Looks To AI To Remove Potential Prosecution Bias. Retrieved from https://www.npr.org/2019/06/15/733081706/san-francisco-da-looks-to-ai-to-remove-potential-prosecution-bias
National Research Council. (2001). What's Changing in Prosecution?: Report of a Workshop. Washington, DC: The National Academies Press. https://doi.org/10.17226/10114
Philipsen , S., & Themeli, E. (2019, May 15). Artificial intelligence in courts: A (legal) introduction to the Robot Judge. Retrieved from http://blog.montaignecentre.com/index.php/1942/artificial-intelligence-in-courts-a-legal-introduction-to-the-robot-judge/
Rigano, C. (2018, October 8). Using Artificial Intelligence to Address Criminal Justice Needs. Retrieved August 14, 2019, from National Institute of Justice: https://www.nij.gov/journals/280/Pages/using-artificial-intelligence-to-address-criminal-justice-needs.aspx
Thompson, D. (2019, June 20). Should We Be Afraid of AI in the Criminal-Justice System? Retrieved from https://www.theatlantic.com/ideas/archive/2019/06/should-we-be-afraid-of-ai-in-the-criminal-justice-system/592084/
Weber, S. (2018, January 10). How artificial intelligence is transforming the criminal justice system. Retrieved from https://www.thoughtworks.com/insights/blog/how-artificial-intelligence-transforming-criminal-justice-system
Wu, S., Wang, J., & Jiang, Q. (2012). The Application of Artificial Intelligence in Prison. Advances in Intelligent and Soft Computing, 159, 331-332. Retrieved August 13, 2019, from https://link.springer.com/chapter/10.1007/978-3-642-29387-0_49


Credits

Group 8, AI in Criminal Justice:
1. Debargha Roy,1833208- Documentation
2. Sai Siddharth, 1833210- Presentation
3. Parth Malhan, 1833216- Presentation
4. Rishabh Bapat, 1833218- Scriptwriting
5. Rohit Jaiswal, 1833219- Video Editing and Direction
6. Sriram Nair 1833225- Acting
7. Y Arulvel, 1833226- Videography
8. Nathan Zachary Fernandez, 1833237- Documentation
9. Radhika Rastogi, 1833280- Documentation
10. Simone Diya, 1833294- Documentation
11. Therese Liam Tom, 1833297- Acting

AI in Diagnosis of Mental Disorders

By
Ananya Nair (1833236) Janaki Vinod (1833254) Likitha Sreekanth (1833261) N Shreya (1833266) Sahana Nujella (1833283) Saiesha Venkatagiri (1833285) Sanjana Kanade (1833288) Sanjana Shandilya (1833289) Shradha Boban (1833291) Shwetha Venkatesh (1833293)
As the possible applications of Artificial Intelligence broadens, it has found its way
into the world of psychology and mental health care.  At their most basic level, AI solutions
help psychiatrists and other mental health professionals do their jobs better. They collect and
analyse reams of data much more quickly than humans could and then suggest effective ways
to treat patients. All around the world, there is an evident mental health epidemic and over
the last decade, digital solutions have offered hope to improve the condition of our mental
wellness. There is also a shortage of psychiatrists and mental health professionals in the field
and even those who do have access are often not able to afford treatment without insurance.
This provides even more reason why these technologies are crucial for us. 
Detection of mental disorders is different because there may not be very evident
physical cues, as there are in physical illnesses and injuries. AI can counter this by translating
traditional subjective treatment into objective, outcome-based treatment and providing a more
accessible, continuously monitored care of the patient, all while maintaining anonymity. 
Mental health diagnosis is also being supplemented by machine-learning tools, which automatically expand their capabilities based on experience and new data. Machine learning models can detect ‘pre-occupation’ by checking social media and other digital footprints to
identify early warning flags. It can monitor words, tones and pauses in day-to-day
conversations and cross-check them with brain scans for diagnosis. Free AI apps like
Woebot, Wysa and Tess can digitally monitor behavioral health and use NLP (Natural
Language Processing) libraries to create personalized interventions. They work around the
clock and can be customized to the patient’s convenience. 
There are several other positives to this solution. The inherent anonymity of the
software can provide a level of comfort and trust that a human may not be able to. Those
embarrassed to reveal more personal details tend to let their guard down too. The functional
costs for clients are considerably lower as well, making this option more affordable, besides being more
accessible of course. 
Other benefits include:
1. Support mental health professionals
As it does for many industries, AI can help support mental health professionals in
doing their jobs. Algorithms can analyse data much faster than humans, can suggest possible
treatments, monitor a patient’s progress and alert the human professional to any concerns. In
many cases, AI and a human clinician would work together.
2. 24/7 access
Due to the lack of human mental health professionals, it can take months to get an
appointment. If patients live in an area without enough mental health professionals, their wait
will be even longer. AI provides a tool that an individual can access all the time, 24/7 without
waiting for an appointment.
3. Not expensive
The cost of care prohibits some individuals from seeking help. Artificially intelligent
tools could offer a more accessible solution.
4. Comfort talking to a bot
While it might take some people time to feel comfortable talking to a bot, the
anonymity of an AI algorithm can be positive. What might be difficult to share with a
therapist in person is easier for some to disclose to a bot.
But it is not all positive. There are certain drawbacks to these technologies that can
hinder people from engaging with them. Personal and intimate details can be hacked and
leaked. False alerts and misdiagnoses can result from discrimination on the basis of race,
gender, age, etc., because of patterns the system may have detected in previously collected
data. This may also arise if speech samples have been collected only from a specific
demographic of people, or with limited visual cues. Developers must avoid this by recognizing
possible loopholes and correcting them before implementation. 
EMPaSchiz – Ensemble algorithm with Multiple Parcellations for Schizophrenia prediction. 
Recently, researchers in India and Canada have developed a tool for diagnosing
schizophrenia with high demonstrated accuracy. Researchers at NIMHANS (National Institute
of Mental Health and Neurosciences) used fMRI (functional MRI) for this purpose.
fMRI uses an artificially created magnetic field to map and measure the patient's
brain activity. 
The machine was used to track brain activity of 93 healthy participants and 81
schizophrenic patients. The larger sample allowed for better tracking of variability and also
included those that were currently undergoing treatment with medication. The parameters
included brain wave frequency, correlation between brain activity and closely-placed regions
and connectivity between different brain regions. Using this data, they were able to build a
model of a schizophrenic brain with 87% accuracy. This can then be used to diagnose the
resting-state fMRI of larger samples in the future. The hope is that automated and
semi-automated diagnostic tools can be developed to detect other kinds of mental disorders
in the future and help predict treatment strategies. 
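The "Ensemble with Multiple Parcellations" idea reduces, at its simplest, to combining the votes of several classifiers, each trained on a different parcellation (division into regions) of the brain. Below is a minimal sketch of only that final combination step; the per-parcellation classifiers themselves, and the feature extraction from fMRI, are elided.

```python
def majority_vote(predictions):
    """Combine binary predictions (1 = schizophrenia, 0 = control) from
    classifiers trained on different brain parcellations."""
    return int(sum(predictions) > len(predictions) / 2)

# Hypothetical: three classifiers, each looking at a different parcellation,
# disagree on one subject; the ensemble sides with the majority.
print(majority_vote([1, 1, 0]))
```

Ensembles of this form tend to outperform any single member because classifiers built on different parcellations make partially independent errors, which the vote averages out.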
The reason that this development was necessary was because there are no diagnosis
methods that are completely reliable, especially because of the inherent variability of the
biology of the human mind. 
Quartet Health
This tool can screen patients' medical histories and behavioral patterns. It can
pre-emptively recommend follow-ups for patients who are predicted to be more likely to have a
mental breakdown or relapse. 
Ellie  
Ellie is a virtual therapist that can detect non-verbal cues and respond accordingly. It
is a 3-D rendered avatar on a TV screen that can observe 66 points on a human face, note the
patient’s speech and length of pauses before answering questions, etc. and use these to
determine her questions, motions, gestures, speech tone, etc. She has been shown to identify
common signs of PTSD in ex-military personnel, proving the potentially high impact of such a
technology. 
World Well Being Project (WWBP)
Researchers from the World Well Being Project analyzed social media with an AI
algorithm to pick out linguistic cues that might predict depression. It turns out that those
suffering from depression express themselves on social media in ways that those dealing with
other chronic conditions do not. They were able to identify depression-associated language
markers. What the researchers found was that linguistic markers could predict depression up
to three months before the person receives a formal diagnosis. Other researchers use
technology to explore the way facial expressions, enunciation of words and tone and
language could indicate suicide risk. 
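A crude sketch of what "linguistic markers" means in practice follows. The marker lists here are invented stand-ins (real models learn thousands of weighted features from labelled data), but elevated first-person pronoun use and negative-affect vocabulary are among the markers this line of research reports.

```python
# Hypothetical marker lists; real systems learn these from data.
NEGATIVE_AFFECT = {"sad", "alone", "hopeless", "tired", "hurt", "cry"}
FIRST_PERSON = {"i", "me", "my", "myself"}

def marker_rate(posts):
    """Fraction of tokens across posts that are depression-associated markers."""
    tokens = [w.strip(".,!?").lower() for p in posts for w in p.split()]
    hits = sum(1 for w in tokens if w in NEGATIVE_AFFECT | FIRST_PERSON)
    return hits / len(tokens) if tokens else 0.0

def flag(posts, threshold=0.2):
    """Raise an early-warning flag when marker density exceeds a threshold."""
    return marker_rate(posts) > threshold
```

For example, `flag(["I feel so alone and hopeless"])` returns `True`, while a set of neutral posts does not trip the threshold. A single-threshold keyword count like this would be far too noisy for clinical use; it only illustrates the shape of the signal the published models exploit.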
Woebot 
Woebot, for example, is a chatbot developed by clinical psychologists at Stanford
University in 2017. It treats depression and anxiety using a digital version of the 40-year-old
technique of cognitive behavioural therapy – a highly structured talk psychotherapy that
seeks to alter a patient’s negative thought patterns in a limited number of sessions. In
a study of university students suffering from depression, those using Woebot experienced
close to a 20% improvement in just two weeks, based on PHQ-9 scores — a common
measure of depression. One reason for Woebot’s success with the study group was the high
level of participant engagement. At a low cost of $39 per month, most were talking to the bot
nearly every day — a level of engagement that simply doesn’t occur with in-person
counselling. 

AI in Therapy


ARTICLE 1
COMPUTERS AND THERAPY

TECHNOLOGY USED TO ENHANCE AND DISSEMINATE MENTAL HEALTH INTERVENTIONS:

 1)  INTERNET SUPPORTED INTERVENTIONS:

WEB-BASED THERAPY: The internet is used as a means of communication between the mental health practitioner and a patient. This method acts as an alternative to traditional face-to-face sessions or as a supplementary service. Psychotherapy can be conducted via video conferencing, instant messaging, email and online forums. The primary advantage is that the therapist and patient need not be located in the same room or even the same country. The disadvantage is that online treatment from a qualified professional costs about the same as face-to-face therapy.

SELF - GUIDED TREATMENTS: Before the internet, self-help programs were distributed as books, videos or audio series. All of this content can be easily adapted for websites, and enhanced with multimedia content, user interaction, quizzes, etc. The quality of these programs varies widely, from cutting-edge interventions from leading researchers, to baseless advice from self-proclaimed gurus.

THERAPIST ASSISTED ONLINE SELF-HELP: Here the online systems combine self-help and interaction with a live therapist. The user may work through some content independently, and a therapist periodically reviews their progress and answers any questions they may have.

 2)  COMPUTERISED THERAPY: Computerised therapy uses software to administer dynamic mental health interventions with limited or no therapist involvement. For example, encoded logic (algorithms) can automatically formulate an individualised treatment plan for each user. Systems may be made available through smartphone apps, standalone software programs, or even embedded in special-purpose computers such as robots. Cognitive behavioural therapy (CBT) is well suited to computerisation, as the treatment strategy follows a well-defined and formal methodology. However, forms of therapy that rely more heavily on verbal interaction and the patient-therapist relationship are not yet possible. 

3) THERAPEUTIC ROBOTS: We often form strong bonds with non-human companions, and animals can play a positive role in mental wellbeing. Unfortunately, many people, such as those suffering from degenerative diseases like Alzheimer's or other forms of dementia, are unable to care for an animal. In these cases, robots can fill an important void in their lives. For example, Paro is a robotic companion which looks like a baby seal. The seal responds to sound and touch, shows different emotions and sleep patterns, and is able to learn information about its environment, such as the name of its carer. Robots can also be a helpful tool for those having trouble with social interaction, for teaching various social skills, and as a learning aid for children with anxiety and mood disorders.

4) VIRTUAL REALITY THERAPY: VR places people in a simulated, imaginary environment, typically through the use of a stereoscopic headset. VR has the advantage that the system designers have complete control over what the user sees and hears. Therefore, VR can be used to help diagnose and treat mental health problems. The most common application is exposure therapy for the treatment of an anxiety disorder or a specific phobia. For example, a patient can be exposed to an object or situation in a safe and controlled environment while a therapist closely monitors the patient's emotional and physiological reactions, before progressing to exposure in real-world situations.

5) VIDEO GAME THERAPY: It is well known that being physically fit and active has a positive impact on mood and happiness, so these games have some follow-on benefits for mental health. Video games that promote physical health, such as Nintendo's Wii Fit, are becoming a popular form of exercise. SPARX is a video game designed to target depression and anxiety in teenagers; the game takes place in a fantasy world, and as users navigate the environment they complete various tasks and challenges that teach them techniques for dealing with depression.
ARTICLE 2
USING ARTIFICIAL INTELLIGENCE FOR MENTAL HEALTH
Advancements in artificial intelligence are bringing psychotherapy to more individuals who need it. Nonetheless, the benefits need to be carefully balanced against their limitations.
MENTAL DISORDERS ARE THE COSTLIEST CONDITION IN THE U.S.
According to the National Institute of Mental Health, one in five adults in the United States (17.9%) experiences some type of mental health disorder. Mental illness not only reduces an individual's quality of life; it is also linked with increased health spending. Charles Roehrig, founding director of the Center for Sustainable Health Spending at the Altarum Institute in Ann Arbor, Michigan, notes that mental disorders, including dementia, now top the list of medical conditions with the highest estimated spending: approximately $201 billion is spent on mental illness annually. Because of the costs associated with treatment, many individuals who experience mental health problems do not receive timely professional input. Cost is not the only contributing factor; other reasons include a shortage of therapists and the stigma associated with mental illness.
AI FOR MENTAL HEALTH AND PERSONALISED COGNITIVE BEHAVIOURAL THERAPY (CBT)
Clinical research psychologist Dr. Alison Darcy created Woebot, a Facebook-integrated computer program that aims to replicate the conversations a patient might have with his or her therapist. Woebot is a chatbot that resembles an instant messaging service. It asks about your mood and thoughts, "listens" to how you are feeling, learns about you and offers evidence-based CBT tools. It aims to emulate a real-life face-to-face meeting, and the interaction is tailored to the individual's situation. Woebot makes CBT more accessible to a modern generation that chronically lacks time and is accustomed to 24/7 connectivity. Some of the earliest chatbots were designed in the 1960s at the MIT Artificial Intelligence Laboratory, whose program ELIZA was able to simulate a short conversation between a therapist and a patient. Chatbots are constantly improving to become more human-like and natural. They also offer different language options. For example, Emma, which speaks Dutch, is a bot designed to help with mild anxiety, while Karim speaks Arabic and has been assisting Syrian refugees struggling to cope after fleeing the atrocities of war. Tess, another AI product, can perform CBT, as well as purportedly reduce the burnout associated with caregiving.
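ELIZA worked by matching the patient's words against hand-written patterns and reflecting them back as questions. The sketch below reproduces that mechanism in a few lines; the rules are invented examples in ELIZA's spirit, whereas modern bots like Woebot rely on trained NLP models rather than fixed pattern lists.

```python
import re

# A few hand-written pattern -> response rules in the spirit of ELIZA.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\balways\b", "Can you think of a specific example?"),
]

def respond(message):
    """Return the first rule-based reply that matches, else a neutral prompt."""
    text = message.lower().rstrip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Tell me more about that."

print(respond("I feel anxious about exams"))
```

Even this toy version shows why such bots feel conversational: reflecting the user's own words back creates the impression of being listened to, without the program understanding anything at all.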
WHAT MAKES AI FOR MENTAL HEALTH SO APPEALING?
The first randomized control trial with Woebot showed that after just two weeks, participants experienced a significant reduction in depression and anxiety. Furthermore, a high level of engagement was observed, with individuals using the bot nearly every day. A virtual therapist named Ellie has also been launched and trialled by the University of Southern California's Institute for Creative Technologies. Initially, Ellie was designed to treat veterans experiencing depression and post-traumatic stress syndrome. Ellie can detect not only words but also non-verbal cues.
Some studies show that we react to these systems as if they were real humans. Some psychologists also argue that we find it easier to share potentially embarrassing information with a virtual therapist. When patients talk to a psychotherapy bot, they report not feeling judged. Ellie, Karim and Woebot can make them feel at ease. In addition, robots are always available and can offer a much higher frequency of therapeutic interactions compared to a human therapist.
HEADING TOWARDS AN AI BASED MENTAL HEALTH CARE SYSTEM?
Machine learning and advanced AI technologies are enabling a new type of care that focuses on providing individualized emotional support. For example, Ginger.io combines machine learning and a clinical network to provide the right level of emotional support at the right time, offering 24/7 online CBT, mindfulness and resilience training. The example of Ginger.io signals that we might be moving towards an AI-based health care system that could transcend temporal, geographical and, to some extent, financial boundaries and limitations. This digital technology makes behavioural health more accessible and convenient, breaks the barrier of staff shortages, and is available whenever it is required. Although AI for mental health still needs to deal with many complexities, research shows that behavioural health interventions benefit from continuity, and technology seems to offer an improved user experience.
PREVENTING SOCIAL ISOLATION AMONG YOUNG PEOPLE USING AI
Social networking is very important for young people dealing with mental illness. Extreme social isolation and difficulties building close relationships are often a feature of their lives. Simon D’Alfonso of the National Center of Excellence in Youth Mental Health in Melbourne, Australia, and his colleagues have been working on the Moderate Online Social Therapy (MOST) project. It is being used with young people recovering from psychosis and depression. The technology helps create a therapeutic environment where young people learn and interact, as well as practice therapeutic techniques. MOST has been used in a series of research trials and was evaluated as a viable mental health tool. Currently, the program is facilitated by human moderators. However, designers of the system plan to eventually replace humans with innovative AI solutions.
VIRTUAL COUNSELOR TO REDUCE STUDENT STRESS
Manolya Kavakli, associate professor at the Macquarie university in Sydney, is leading a project that aims to help students develop better coping techniques, particularly in connection with exam stress. Exams often put tremendous pressure on young people, which can have negative health implications such as depression, insomnia and suicide. When exposed to excessive stress, timely counselling can be imperative to maintaining health. The virtual counsellor mimics a psychologist and offers advice and support with stress management.

ARTICLE 3
BOTS ARE BECOMING HIGHLY SKILLED ASSISTANTS IN PHYSICAL THERAPY
            Within the last decade, there has been great progress in the fields of robotics and artificial intelligence. Innovators have been seeking ways to merge humans and machines and, in some areas, remove humans altogether. In AI, chatbots, self-driving cars and voice recognition have all made significant strides. Perhaps most importantly, advances in AI and robotic technologies within health care are improving patient treatment and care.
           Bots are helping humans provide improved care. The robots focus on reducing physical impairments while the therapists assist in translating the gains in impairment into function. Current care methods rely on physical therapists manually helping patients learn how to balance and strengthen muscles through a series of exercises and stretches.
            The advancement of machine learning and artificial intelligence technologies, along with the evolution of robotics, has produced commercialized robotic therapy solutions with a great capacity for immediate interactive response.
            Traditional therapy generally involves the therapist moving the patient’s limbs, or the patient struggling mightily with crutches or canes, whereas exoskeleton technology takes much of the physical burden off of the patient because of its ability to learn and predict movements.
            The human connection between a patient and a therapist is still a hugely important factor, as this type of patient treatment often involves an emotional component that machines cannot yet address.
ARTICLE 4
THE POTENTIAL OF AI THERAPY BOTS IN MENTAL HEALTH CARE
Artificial intelligence is having a marked impact on the pharma and healthcare
industries. One area within the industry that has potential to be disrupted by AI is
mental health care, specifically with the use of chatbots for therapy and general
wellbeing. Previously, there have been concerns about this type of service, largely
to do with the safety of the tools. From a different perspective, there’s also the
question of whether a chatbot UX can ever replicate the often nuanced interactions
that take place between a patient and therapist – as well as the associated levels of
empathy and trust.
Accessibility and removing stigmas
Naturally, the apparent rise of mental health issues plus a strained healthcare
system means that many sufferers might avoid seeking help altogether.
The key message is that people should not solely rely on therapy bots, or use
them for more serious or long-term issues. What it does mean, however, is that
people can use these services in real moments of need. Similarly, these services
are designed to naturally align with user behaviour, with many users ‘checking
in’ on Facebook Messenger much like a friend would.
Woebot – transparency and humour
One notable thing about Woebot is its transparency. It lets users know from the
get-go that it is an automated service, emphasizing that it should not be a
replacement for therapy (and telling you what to do if you’re struggling on a
more serious level).
The fact that the bot overtly states that it is not human is definitely a positive. As
well as instilling trust in users, this could also be more effective for encouraging
people to open up as it eliminates the fear of judgment.
ARTICLE 5
 CAN AI BE AN EFFECTIVE THERAPIST?
Due to the rapid advancement of artificial intelligence and its application in the medical field, researchers and medical practitioners are now looking at ways in which artificial intelligence and machine learning can be leveraged to diagnose early symptoms of, and find potential cures for, various mental illnesses. Noteworthy advancements have been made in this regard, and AI-powered solutions such as NLP tools and even chatbots have been designed to interpret the human mind. Several start-ups have developed the following AI therapies.
Virtual Therapist: Machine learning capabilities are used to identify patients with a mental health condition and provide a customized treatment plan based on their previous medical history and behavioural patterns. The AI interacts with patients in real time. Using the patterns provided by the AI, clinicians have also detected patients suffering from PTSD. The virtual assistant can analyse facial expressions, head gestures, eye gaze direction and voice quality to identify behavioural change indicators related to depression and post-traumatic stress.
Quartet Health and the virtual agent Ellie are examples of this virtual-therapist technique.
AI-Powered Genetic Counsellor: AI software that has evolved to provide services similar to those of a genetic counsellor, which involves advising individuals and families at risk of a genetic disorder by helping them understand the condition better and providing them with much-needed mental support. AI has been leveraged in genome sequencing to spot disease markers in patients and even to make personalized drug treatment plans. Start-ups such as Clear Genetics and Optra GURU have developed this software.
Chatbots for Depression: These AI tools provide users with features such as guided and unguided meditation, message reminders and progress tracking, applying Cognitive Behavioural Therapy (CBT) techniques for depression. Such apps send over a million messages per week to help users deal with issues related to depression, anxiety, relationship problems, procrastination, loneliness, grief, addiction, pain management and more. Responding to this burgeoning demand, the India- and UK-based healthcare start-up Touchkin introduced Wysa, its AI-powered chatbot. Another popular name is Woebot, a downloadable app with more than half a million users.
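To make the idea of a check-in chatbot concrete, here is a minimal sketch of how one might classify a user message and pick a reply. This is purely illustrative and far simpler than how Wysa or Woebot actually work: real services use trained NLP models, and the keywords, responses and function names below are all hypothetical.

```python
# Minimal, hypothetical sketch of a rule-based "check-in" chatbot.
# Real therapy bots use trained NLP models, not keyword matching.

CRISIS_KEYWORDS = {"flashback", "nightmare", "hopeless"}
LOW_MOOD_KEYWORDS = {"sad", "anxious", "lonely", "stressed"}

def respond(message: str) -> str:
    """Classify a message by keyword overlap and choose a reply."""
    words = set(message.lower().split())
    if words & CRISIS_KEYWORDS:
        # Transparency and safety: direct serious cases to human help.
        return "This sounds serious. Please reach out to a professional or a helpline."
    if words & LOW_MOOD_KEYWORDS:
        # A CBT-style prompt: ask the user to examine the thought behind the feeling.
        return "I'm sorry you feel that way. What thought is behind that feeling?"
    return "Thanks for checking in! How has your day been?"

print(respond("I feel sad and lonely today"))
```

Even this toy version shows the design point made above: the bot must recognize its own limits and hand serious cases over to direct human intervention.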
Stumbling blocks of AI: The drawback is that while technology can ensure a certain degree of anonymity, it fails to replace human intervention. When people show symptoms like flashbacks, nightmares and severe anxiety, they need personal assistance and direct human intervention, which is crucial for a person in distress.
The main setbacks seen in all these fields are cost, hacking and data theft. The confidential data of patients can be misused. As AI systems are prone to vulnerabilities, they can also lead to inaccurate disease detection and false drug recommendations. Even though machines run algorithms that can mimic human emotions in speech and visual form, there is a long road ahead before we can completely rely on AI and ML capabilities in fields like psychological counselling, which requires more human interaction than machine.

REFERENCES
Computers and therapy
Using artificial intelligence for mental health
Bots are becoming highly skilled assistants in physical therapy
The potential of AI therapy bots in mental health care
Can artificial intelligence be an effective therapist?

WORK DISTRIBUTION
1. Adarsh K V 1833201 Video Editing

2. Munna R S 1833214 PowerPoint Presentation and Class Presentation

3. Amishi Sharma 1833233 Acting and Direction

4. Ashley Grace Jojy 1833242 Scripting and Documentation

5. Jane Sebastian 1833255 PowerPoint Presentation and Class Presentation

6. Apoorva Angel Augustine 1833274 Documentation

7. Rachana Muralidhar 1833279 Acting and Direction 

8. Sai Chadana Mukkamala 1833284 Scripting and Documentation

9. Yukta Mehdiratta 1833298 Acting, Voice-over and Direction

10. Sharanya N 1833300 Videography and Direction







