Dr. Joann Elmore, the study's lead author and a professor of medicine at the David Geffen School of Medicine at UCLA, said in a statement: "For many diseases, data from the outpatient setting can provide an early warning to emergency departments and hospital intensive care units of what is to come.

The study was posted Wednesday in the peer-reviewed Journal of Medical Internet Research.
"The majority of COVID-19 studies evaluate hospitalization data, but we also looked at the larger outpatient clinic setting, where most patients turn first for medical care when illness and symptoms arise.
"We may never truly know if these excess patients represented early and undetected COVID-19 cases in our area. But the lessons learned from this pandemic, paired with health care analytics that enable real-time surveillance of disease and symptoms, can potentially help us identify and track emerging outbreaks and future epidemics."
Guardian touts op-ed on why AI takeover won't happen as 'written by robot,' but tech-heads smell a human behind the trick
9 Sep, 2020 00:37
...
While the Guardian claims that the soulless algorithm was asked to "write an essay for us from scratch," one has to read the editor's note below the purportedly AI-penned opus to see that the issue is more complicated. It says that the machine was fed a prompt asking it to "focus on why humans have nothing to fear from AI" and had several tries at the task.
After the robot came up with as many as eight essays, which the Guardian claims were all "unique, interesting and advanced a different argument," the very human editors cherry-picked "the best part of each" to make a coherent text out of them.
Although the Guardian said that it took its op-ed team even less time to edit GPT-3's musings than articles written by humans, tech experts and online pundits have cried foul, accusing the newspaper of "overhyping" the issue and selling their own thoughts under a clickbait title.
"Editor's note: Actually, we wrote the standfirst and the rather misleading headline. Also, the robot wrote eight times this much and we organised it to make it better..." tweeted Bloomberg Tax editor Joe Stanley-Smith.
Futurist Jarno Duursma, who has written books on the bitcoin blockchain and artificial intelligence, agreed, saying that to portray an essay compiled by the Guardian as written entirely by a robot is an exaggeration.
"Exactly. GPT-3 created eight different essays. The Guardian journalists picked the best parts of each essay (!). After this manual selection they edited the article into a coherent article. That is not the same as 'this artificial intelligent system wrote this article.'"
Science researcher and writer Martin Robbins did not mince words, accusing the Guardian of an intent to deceive its readers about the AI's actual skills.
"Watching journalists cheat to make a tech company's algorithm seem more capable than it actually is.... just.... have people learned nothing from the last decade about the importance of good coverage of machine learning?" he wrote.
Mozilla fellow Daniel Leufer was even bolder in his criticism, calling the Guardian's stunt "an absolute joke."
"Rephrase: a robot didn't write this article, but a machine learning system produced 8 substandard, barely-readable texts based on being prompted with the exact structure the Guardian wanted," he summed up. He also spared no criticism for the piece itself, describing it as a patchwork that "still reads badly."
...
The algorithm also ventured into woke territory, arguing that "AI should be treated with care and respect," and that "we need to give robots rights."
"Robots are just like us. They are made in our image," it - or perhaps the Guardian editorial board, in that instance - wrote.