Auto plant closures tied to surge in opioid overdose deaths

(Reuters Health) – Opioid overdose deaths have spiked in the wake of automotive assembly plant closures across the U.S. South and Midwest, a new study suggests.

FILE PHOTO: A syringe filled with a narcotic, an empty syringe and a spoon sit on the roof of a car where a man in his 20s overdosed on opioids in Lynn, Massachusetts, U.S., August 14, 2017. Picture taken August 14, 2017. REUTERS/Brian Snyder

Plant closures were associated with an 85% surge in opioid overdose mortality rates among working-age adults five years later, compared with what would have been expected if these factories had remained open, researchers report in JAMA Internal Medicine.

“We found that automotive assembly plant closures – which led to dramatic reductions in economic opportunities in manufacturing for individuals living in those areas – were strongly associated with poor health outcomes, specifically higher opioid overdose death rates,” said lead author Dr. Atheendar Venkataramani of the Perelman School of Medicine at the University of Pennsylvania in Philadelphia.

“The fading American dream may be more than an economic problem – it may also adversely affect America’s health,” Venkataramani said by email.

Venkataramani’s team examined opioid-related deaths from 1999 to 2016 in 112 manufacturing counties near major automotive assembly plants. At the start of the study, these counties were home to 2.7% of U.S. adults aged 18 to 65.

During the study period, 3.4% of opioid deaths nationwide occurred in these counties, including 29 counties that experienced plant closures and 83 that did not.

At the start of the study period, opioid overdose death rates were similar in all of these manufacturing counties at roughly 1 per 100,000 population.

But where plants closed, there were 8.6 more deaths for every 100,000 people five years later compared with counties where factories remained open.

Young white men were hardest hit. Five years after plant closures, there were 20.1 more opioid-related deaths per 100,000 among white men ages 18 to 34 and 12.8 more per 100,000 among white men ages 35 to 65, compared with counties without plant closures.

Younger white women were hard-hit too. There were 6.4 more opioid overdose deaths per 100,000 among white women ages 18 to 34 five years after plants closed; deaths also rose for older women but the difference was too small to rule out the possibility that it was due to chance.

The authors note that although the study shows a large, robust association between plant closures and fatal opioid overdoses, the closures are not the only cause of the opioid crisis.

The supply of drugs plays a major role, the study team notes, and many efforts to combat the opioid crisis during the study period focused on curbing opioid prescriptions. In the wake of plant closures, however, overdose death rates rose for both prescription and street drugs.

At the same time, drug overdoses are increasingly seen as “deaths of despair” not unlike fatalities from smoking and drinking, which tend to rise during economic downturns, the study team points out.

“When an automotive plant closes, thousands of people may lose jobs that provide economic opportunity, community and stability,” said Dr. Michael Barnett, an assistant professor of health policy and management at the Harvard T. H. Chan School of Public Health in Boston.

“There could be many factors connecting this to opioid use and addiction: worsening mental health, loss of access to health care, fewer avenues to engage in community outside of substances,” Barnett said by email. “It is very difficult to say which of the many possibilities is most important.”

The findings may not be unique to the auto industry, Barnett added, although more research is needed to assess how much factory closures in other sectors might impact drug use or fatalities.

“This study definitely provides strong support for the idea that economic conditions and unemployment may have played a role in catalyzing the opioid crisis, particularly in the states with many closures, like Ohio, Michigan and Tennessee,” Barnett said. “It reinforces that health is not just biology and genetics – the economy, poverty, and social factors are crucial as well.”

SOURCE: JAMA Internal Medicine, online December 30, 2019.

Our Standards: The Thomson Reuters Trust Principles.


Powered by WPeMatico

U.S. vaping-related deaths rise to 55, cases of illness to 2,561

FILE PHOTO: A man uses a vape device in this illustration picture, September 19, 2019. REUTERS/Adnan Abidi/Illustration/File Photo

(Reuters) – U.S. health officials said on Tuesday that one more death had occurred since last week from a mysterious respiratory illness tied to vaping, taking the death toll to 55.

The Centers for Disease Control and Prevention (CDC) also reported 2,561 cases of the illness associated with use of e-cigarettes, or vaping products, as of Dec. 27.

The agency released a series of reports on Friday indicating that the outbreak of vaping-related lung injuries appears to be waning, as evidence mounts that vitamin E acetate, a cutting agent used in marijuana vape cartridges, is playing a role in the illnesses.

The CDC has called vitamin E acetate a “chemical of concern” and recommended that the substance not be added to e-cigarette, or vaping, products while the investigation is ongoing.

Reporting by Saumya Sibi Joseph in Bengaluru; Editing by Aditya Soni


A Reality Check On Artificial Intelligence: Are Health Care Claims Overblown?


Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete.

“There’s nothing that I’ve seen in my 30-plus years studying medicine that could be as impactful and transformative” as AI, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration ― which has approved more than 40 AI products in the past five years ― says “the potential of digital health is nothing short of revolutionary.”

Yet many health industry experts fear AI-based products won’t be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk ― and that regulators aren’t doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford’s Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma ― an error that could have led doctors to deprive asthma patients of the extra care they need.

“It’s only a matter of time before something like this leads to a serious health problem,” said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is “nearly at the peak of inflated expectations,” concluded a July report from the research company Gartner. “As the reality gets tested, there will likely be a rough slide into the trough of disillusionment.”

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” acknowledges that many AI products are little more than hot air. “It’s a mixed bag,” he said.


Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. “Most AI products have little evidence to support them,” Kocher said. Some risks won’t become apparent until an AI system has been used by large numbers of patients. “We’re going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data,” Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system ― which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy ― was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such “stealth research” ― described only in press releases or promotional events ― often overstates a company’s accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software “may make patients into unwitting guinea pigs,” said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as “black boxes” because even their developers don’t know how they have reached their conclusions. Given that AI is so new ― and many of its risks unknown ― the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don’t require FDA approval.

“None of the companies that I have invested in are covered by the FDA regulations,” Kocher said.

Legislation passed by Congress in 2016 ― and championed by the tech industry ― exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There’s been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.


“Almost none of the [AI] stuff marketed to patients really works,” said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices ― such as ones that help people count their daily steps ― need less scrutiny than ones that diagnose or treat disease.

Some software developers don’t bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. “It’s not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal,” said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy’s report. “That’s not how the U.S. economy works.”

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

“If failing fast means a whole bunch of people will die, I don’t think we want to fail fast,” Etzioni said. “Nobody is going to be happy, including investors, if people die or are severely hurt.”

Relaxing Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market “moderate-risk” products with no clinical testing as long as they’re deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.


Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed “substantially equivalent” to products marketed before 1976.

AI products cleared by the FDA today are largely “locked,” so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA’s Center for Devices and Radiological Health. The FDA has not yet authorized “unlocked” AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA’s pilot “pre-certification” program, launched in 2017, is designed to “reduce the time and cost of market entry for software developers,” imposing the “least burdensome” system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

While he was FDA commissioner in 2017, Scott Gottlieb said government regulators need to make sure their approach to innovative products “is efficient and that it fosters, not impedes, innovation.”

Under the plan, the FDA would pre-certify companies that “demonstrate a culture of quality and organizational excellence,” which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a “streamlined” review ― or no FDA review at all. Once products are on the market, companies would be responsible for monitoring their own products’ safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. “We definitely don’t want patients to be hurt,” said Patel, who noted that devices cleared through pre-certification can be recalled if needed. “There are a lot of guardrails still in place.”

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. “People could be harmed because something wasn’t required to be proven accurate or safe before it is widely used.”

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

“The honor system is not a regulatory regime,” said Dr. Jesse Ehrenfeld, who chairs the physician group’s board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency’s ability to ensure company safety reports are “accurate, timely and based on all available information.”


When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Dr. Michael Abramoff, the company’s founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first “autonomous” AI product ― one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff’s company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person’s risk of Alzheimer’s based on their speech. Predictions were more accurate for some patients than others. “Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment,” said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York’s Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital’s portable chest X-rays ― taken at a patient’s bedside ― and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it’s not surprising that these patients had a greater risk of lung infection.


DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a “game changer.” But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients’ kidney function didn’t improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of “overdiagnosis,” in which the AI system flagged borderline kidney issues that didn’t need treatment, Jha said. Google had no comment in response to Jha’s conclusions.

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient’s kidneys might stop prescribing ibuprofen ― a generally safe pain reliever that poses a small risk to kidney function ― in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford’s Cho said. That’s because diseases are more complex ― and the health care system far more dysfunctional ― than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren’t aware that they’re building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients’ medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients’ interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

“While it is the job of entrepreneurs to think big and take risks,” Saini said, “it is the job of doctors to protect their patients.”


Diet Dr Pepper does not promise weight loss or deceive consumers: U.S. appeals court

(Reuters) – A U.S. federal appeals court on Monday said the maker of Diet Dr Pepper did not deceive consumers into thinking the soft drink promoted weight loss by including “diet” in its name, a decision that could doom a similar lawsuit over Diet Coke.

FILE PHOTO: Dr Pepper soda cans for sale are pictured at a grocery store in Pasadena, California, U.S., February 14, 2018. REUTERS/Mario Anzuoni

The 9th U.S. Circuit Court of Appeals rejected California resident Shana Becerra’s claim that Dr Pepper/Seven Up, now part of Keurig Dr Pepper Inc, misled consumers, including through ads featuring physically attractive models, and that its conduct violated California consumer fraud laws.

Becerra, who claimed to have struggled with obesity since childhood, relied on scientific evidence that she said linked aspartame, an artificial sweetener used in Diet Dr Pepper, to possible weight gain.

But in a 3-0 decision for the San Francisco-based court, Circuit Judge Jay Bybee said Becerra failed to show that reasonable consumers associated diet soda with health benefits.

“No reasonable consumer would assume that Diet Dr Pepper’s use of the term ‘diet’ promises weight loss or management,” Bybee wrote. “The use of ‘diet’ in a soft drink’s brand name is understood as a relative claim about the calorie content of that soft drink compared to the same brand’s ‘regular’ (full-caloric) option.”

John Weston, one of Becerra’s lawyers, said he was disappointed with the decision.

Evan Young, a lawyer for Dr Pepper/Seven Up, said in an email that people who drink Diet Dr Pepper know it is a calorie-free version of Dr Pepper.

“We appreciate that the Ninth Circuit has allowed common sense to prevail and that it rejected the attempt to complicate something that is so simple,” he added.

Monday’s decision upheld an August 2018 dismissal by U.S. District Judge William Orrick in San Francisco.

The 9th Circuit last week dismissed Becerra’s appeal in a similar case against Coca-Cola Co over Diet Coke, saying it lacked jurisdiction because the February 2018 dismissal by U.S. District Judge William Alsup in San Francisco was not final.

Alsup would be expected to apply the reasoning in Monday’s decision if Becerra pursued the Diet Coke case.

The federal appeals court in Manhattan earlier this year upheld dismissals of similar cases against Dr Pepper, Coca-Cola and PepsiCo Inc under New York consumer fraud laws.

The case is Becerra v Dr. Pepper/Seven Up Inc, 9th U.S. Circuit Court of Appeals, No. 18-16721.

Reporting by Jonathan Stempel in New York; Editing by David Gregorio


Eye injuries from laundry pods rising in U.S.

(Reuters Health) – A growing number of kids are getting chemicals from laundry detergent pods in their eyes, even as ocular injuries from other types of household cleaners steadily decline, a U.S. study suggests.

Nationwide, poison control centers received 319,508 calls from 2000 to 2016 about people getting household cleaning products in their eyes, an average of 18,795 calls a year, researchers report in the journal Eye.

Overall, the annual frequency of these calls declined by 29% during the study period. But annual calls related to laundry detergent pods surged 1,960% from 2012, when they first hit the market, through the end of 2016.

Despite decades of efforts to improve child-resistant packaging, kids accounted for the vast majority of eye injuries from things like laundry pods, dish soap, glass cleaners, bleach and disinfectants. For every 100,000 U.S. residents, 28.4 kids under age 6 got household cleaning products in their eyes each year, compared with 4.2 teens and adults.

The highest exposure was among 2-year-olds, with 62.8 cases for every 100,000 residents.

“Cleaning products may be particularly alluring to young children due to their colorful packaging and contents and unique scents,” said senior study author Dr. Gary Smith, director of the Center for Injury Research and Policy at Nationwide Children’s Hospital in Columbus, Ohio.

This is especially true of laundry pods, which come in snack-size packets and contain highly concentrated detergent that can be more harmful to kids who drink it or get it in their eyes. Eye injuries from pods surged during the study even though exposure to other types of detergent remained relatively constant, Smith said by email.

“This emphasizes the urgent need to increase efforts to prevent laundry detergent packet exposures among young children, including strengthening of the current safety standard for these packets,” Smith said.

Eye injuries from laundry pods can include redness and irritation, infections, corneal abrasions and burns, the study team notes. Eating or drinking the contents can cause seizures, coma, severe breathing impairments, and in rare cases can be fatal, previous research has found.

In the current study, bleaches were the cleaners people most often got in their eyes, accounting for about 26% of cases. Wall, floor, and tile cleaners made up another 13% of cases, followed by disinfectants at 11% and laundry detergents at 6%.

Drain cleaners, oven cleaners and dishwasher detergents were responsible for the most cases with serious health problems.

About 6% of cases with a known medical outcome caused no health effects, and another 82% caused only minor issues like redness or itchiness in the eyes. Roughly 12% of cases resulted in moderate medical issues, and only 0.2% involved severe problems.

The researchers only had data on cases voluntarily reported to poison control centers, so the results likely underestimate the number of injuries, the authors note. Another drawback is that researchers lacked data on health outcomes for many cases reported to poison control.

The results underscore that cleaning products should be kept where kids can’t get them, especially when there are toddlers and preschoolers in the home, said Dr. Lois Lee, an emergency medicine physician at Boston Children’s Hospital who wasn’t involved in the study.

“This means keeping the contents in the original container, with the childproof cap on, and the container should be either locked away or placed on a high shelf, so young children can’t access the product,” Lee said by email. “Parents should also make sure to close the container of the cleaning product as soon as they have taken out what they need.”

For emergencies, parents should save the national Poison Help Line number in cellphone contacts and tape a copy of the number to any landlines, Smith advised. (In the U.S., call 1-800-222-1222; in the UK, 111; in Australia, 131 126.)

“Call immediately if you think your child has come into contact with a household cleaning product or other dangerous product,” Smith said. “You do not need to wait for symptoms to develop to call.”

SOURCE: Eye, online December 9, 2019.


Samoa ends measles state of emergency as infection rate slows

FILE PHOTO: A vial of the measles, mumps, and rubella (MMR) vaccine is pictured at the International Community Health Services clinic in Seattle, Washington, U.S., March 20, 2019. Picture taken March 20, 2019. REUTERS/Lindsey Wasson/File Photo

MELBOURNE (Reuters) – The South Pacific island nation of Samoa has lifted a six-week state of emergency after the infection rate from a measles outbreak that has swept the country began to come under control.

Samoa’s island population of just 200,000 has been gripped by the highly infectious disease, which has killed 81 people, most of them babies and young children, and infected more than 5,600.

The government said in a statement late on Saturday that the emergency orders, which included aggressive measures to contain the virus such as closing schools and restricting travel, put in place last month had ended.

Measles cases are on the rise globally, including in wealthy nations such as the United States and Germany, where some parents shun life-saving vaccines due to false theories suggesting links between childhood immunizations and autism.

Death and infection rates in Samoa started to slow in mid-December after a vaccine drive pushed immunization rates towards 95%, the level aid agencies say is effective in creating “herd immunity” that can contain the disease.

Earlier in the year, an outbreak of measles hit the New Zealand city of Auckland, a hub for travel to and from small Pacific islands.

The disease soon found a highly susceptible population in Samoa, which had far lower vaccination rates than its neighbors.

Reporting by Will Ziebell in Melbourne; Editing by Robert Birsel


Hospital Group Mum As Members Pursue Patients With Lawsuits And Debt Collectors

The American Hospital Association, the biggest hospital trade group, says it promotes “best practices” among medical systems to treat patients more effectively and improve community health.

But the powerful association has stayed largely silent about hospitals suing thousands of patients for overdue bills, seizing homes or wages and even forcing families into bankruptcy.

Atlantic Health System, whose CEO is the AHA’s chairman, Brian Gragnolati, has sued patients for unpaid bills thousands of times this year, court records show, including a family struggling to pay bills for three children with cystic fibrosis.

The AHA, which represents nearly 5,000 mostly nonprofit hospitals and medical systems, has issued few guidelines on such aggressive practices or the limited financial assistance policies that often trigger them.

In a year when multiple health systems have come under fire for suing patients, from giants UVA Health System and VCU Health to community hospitals in Oklahoma, the association has made no concrete move to develop an industry standard.

“There could be a broader message coming out of hospital leadership” about harsh collections, said Erin Fuse Brown, a law professor at Georgia State University who studies hospital billing. “It seems unconscionable if they are claiming to serve the community and then saddling patients with these financial obligations that are ruinous.”

Nonprofit hospitals are required to provide “community benefit,” including charity care in return for billions of dollars in government subsidies they get through tax exemptions. But the rules are lax and vague, experts say, especially for bill forgiveness and collections.

The Affordable Care Act requires nonprofit hospitals to have a financial assistance policy for needy patients but offers no guidance about its terms.

“There is no requirement” for minimum hospital charity under federal law, said Ge Bai, a health policy professor at Johns Hopkins. “You design your own policy. And you can make it extremely hard to qualify.”

Practices vary sharply, a review of hospital policies and data from IRS filings show. Some hospitals write off the entire bill for a patient from a family of four making up to $77,000 a year. Others give free care only if that family makes less than $26,000.

The law does not substantially limit harsh collections, either. IRS regulations require only that nonprofit hospitals make “reasonable efforts” to determine if patients qualify for financial assistance before suing them, garnishing their wages and putting liens on their homes.

Gaping differences in both collections and financial assistance show up in the policies of health systems represented on AHA’s board of trustees.

This year, AHA board chairman Gragnolati’s Atlantic Health System, in northern New Jersey, sued patients for unpaid bills more than 8,000 times, court records show.

Atlantic Health sued Robert and Tricia Mechan of Maywood, N.J., to recover $7,982 in unpaid bills for treatment of their son Jonathan at the system’s Morristown Medical Center.

Three of the Mechans’ four children have cystic fibrosis, a chronic lung disease, including Jonathan, 18. Tricia Mechan works two jobs — full time as a manager at Gary’s Wine & Marketplace and part time at Lowe’s — to try to pay doctor and hospital bills that pile up even with insurance.

“I have bill collectors call me all the time,” Tricia Mechan said. “You’re asking me for more, and all I’m doing is trying to get the best care for my children. I didn’t ask to have sick children.”

She closed a savings account and borrowed money to settle Jonathan’s bill for $6,000. Another son with cystic fibrosis, Matthew, owes Atlantic Health $4,200 and is paying it off at $25 a month, she said.

Marna Borgstrom, CEO of Yale New Haven Health, also sits on AHA’s board. Yale almost never sues families like the Mechans.

“I have not signed off on a legal action since 2015” against a patient, Patrick McCabe, the system’s senior vice president of finance, said in an interview. “People are coming to us when they are at their most vulnerable, and we truly believe we need to work with them and not create any additional stress that can be avoided.”

Yale has treated Nicholas Ruschmeyer, 30, a Vermont ski mountain manager, for recurring cancer. He has been careful to maintain insurance, but a few years ago the hospital performed a $12,000 genetic test that wasn’t covered.

“Yale completely absorbed the cost,” said his mother, Sherrie Ruschmeyer. Yale is “wonderful to work with, not at all aggressive,” she said.

Atlantic Health bars families from receiving financial assistance if they have more than $15,000 in savings or other assets. Yale never asks about savings. Even families who own homes without a mortgage qualify if their income is low enough.

Atlantic Health’s policies include seizing patient wages and bank accounts through court orders to recoup overdue bills. Yale says it does not do this.

In some ways, Atlantic Health’s policies are more generous than those of other systems.

It forgives bills exceeding 30% of a family’s income in many cases, the kind of “catastrophic” assistance some hospitals lack. It also bills many uninsured patients only slightly more than Medicare rates, far less than the rates other hospitals charge uninsured patients, which often run substantially higher than the cost of treatment.

“Atlantic Health System’s billing policy complies with all state and federal guidelines,” said spokesman Luke Margolis. “While we are willing to assist patients no matter their financial situation, those who can pay should do so.”

After a reporter inquired about its practices, Atlantic Health said it “is actively engaged in refining our policies to reflect our patients’ realities.”

AHA also is considering changing its position on billing in the wake of recent reports on aggressive and ruinous hospital practices.

Previously AHA said billing offices should “assist patients who cannot pay,” without giving specifics, and treat them with “dignity and respect.” Queried this month, association CEO Rick Pollack said, “We are reevaluating the guidelines [for collections and financial assistance] to ensure they best serve the needs of patients.”

Kaiser Health News found that the University of Virginia Health System sued patients 36,000 times over six years, taking tax refunds, wages and property and billing the uninsured at rates far higher than the cost of care. Richmond-based VCU Health’s physicians group sued patients 56,000 times over seven years, KHN also found.

In Memphis, Methodist Le Bonheur Healthcare sued patients for unpaid bills more than 8,000 times over five years, ProPublica reported. In South Carolina, hospitals have been taking millions in tax refunds from patients and their families, an examination by The Post and Courier showed.

In response, VCU pledged to stop suing all patients. UVA promised to “drastically” reduce lawsuits, increase financial assistance and consider further steps. Methodist erased debt for 6,500 patients and said it would overhaul its collections rules.

Yale’s less aggressive policies also came in response to journalism — a 2003 Wall Street Journal report on how the system hounded one family. Yale still sends overdue bills to collections, McCabe said. But it balks at the last, drastic step of asking a court to approve seizing income and assets.

For patients with unpaid bills, he said, “if you’re willing to play a game of chicken, you will win.”

Hospitals say they see more and more patients who can’t pay, even with insurance, as medical costs rise, family incomes plateau and out-of-pocket health expenses increase. In particular, they blame widespread high-deductible coverage, which requires patients to pay thousands before the insurance takes over.

“More consumers pay far more with fewer benefits,” Pollack said.

Some states go beyond federal rules for charity care and collections. In California, patients with an income of less than $90,000 for a family of four must be eligible for free or discounted care. New Jersey requires Atlantic Health and other systems to give free care to patients from families of four with income less than $51,000.

The National Consumer Law Center, a nonprofit advocacy group, suggests all states adopt that standard for large medical facilities. Its model medical debt law also would require substantial discounts for families of four with income below $103,000 and relief for patients with even higher incomes facing catastrophic bills.

The AHA should consider similar changes in its own guidelines, NCLC attorney Jenifer Bosco said.

“I would be interested in seeing them taking a more active role in creating some standard for hospitals about what’s too much,” she said. “What’s going too far? Given that this is a helping profession, what would be some appropriate industry standards?”

KHN senior correspondent Jordan Rau contributed to this report.


Chicken pox outbreak forces migrant shelter to shutter in northern Mexico

A migrant carrying his daughter prays at the government-run Leona Vicario migrant shelter, temporarily closed by health authorities following a chicken pox outbreak among Central American migrants who have been sent by the U.S. government to wait out their asylum court dates in Mexico, in Ciudad Juarez, Mexico December 27, 2019. REUTERS/Jose Luis Gonzalez

CIUDAD JUAREZ, Mexico (Reuters) – An outbreak of chicken pox has forced the temporary closure of a shelter housing Central American migrants sent to Mexico from the United States, Mexican authorities said on Friday, as officials sought to contain the highly contagious virus.

The shelter in the northern city of Ciudad Juarez, across the border from El Paso, Texas, closed on Thursday after 72 people, including 69 children, were diagnosed with the virus, officials in Mexico’s Chihuahua state said in a statement.

Most people infected with chicken pox simply feel unwell — with symptoms including an itchy, blister-like rash, fever, headache and fatigue — but some develop serious complications.

The Ciudad Juarez facility, which houses nearly 800 people awaiting court dates in the United States, is part of a network of shelters in Mexico that the Trump administration has used to enforce its policy of sending mostly Central American migrants south of the border while their asylum cases are pending.

The shelter, which is run by Mexico’s federal government, did not respond to a request for comment.

A federal official said the virus was spread by a Honduran girl returned to Mexico by the U.S. government under Washington’s Migrant Protection Protocols (MPP) policy, Mexico’s El Diario newspaper reported.

Health officials in Chihuahua said the virus has been contained and fewer than 50 people remain ill. Many migrants are being vaccinated, particularly vulnerable populations such as children, the elderly, and those with chronic illnesses.

The chicken pox outbreak is the biggest such incident in Ciudad Juarez since this year’s introduction of MPP, also known as the “Remain in Mexico” policy.

Disease outbreaks such as mumps, measles and chicken pox have also occurred in migrant detention facilities in the United States.

Reporting by Julio-Cesar Chavez; Editing by Drazen Jorgic and Daniel Wallis


Online triage tool may help patients decide if they need immediate care

(Reuters Health) – An online tool that analyzes symptoms may help people decide whether to seek immediate care in the emergency room or to adopt a wait-and-see strategy, a new study suggests.

After analyzing data from more than 150,000 encounters between patients and the Buoy Health triage tool, researchers found that nearly a third of users concluded after using the tool that their situation was less dire and their need for care less urgent than originally assumed. In 4% of cases, patients decided their situation was more serious than they initially thought, according to the results published in JAMA Network Open.

The researchers also found a reduction in the proportion of patients who were uncertain about the seriousness of their health problem, from 34% before using the tool to 21% afterward.

The new findings show the “chatbot can impact the care patients intend to receive,” said the study’s lead author, Aaron Winn, from the school of pharmacy at the Medical College of Wisconsin, in Milwaukee.

Patients interested in learning more about their symptoms are led through a series of questions by a chatbot designed to home in on possible causes. Along with the symptom questions, users are asked to provide only their age and gender.

The developers were cognizant of the dangers of having the chatbot dispense medical advice, which is why certain safeguards were programmed in, said coauthor Dr. Bradley Crotty, also from Medical College of Wisconsin. Crotty has served as an advisor to Buoy Health.

“For 75-plus conditions that are too dangerous for a person to be chatting with a computer for, Buoy automatically routes them to the ER,” Crotty said in an email. “For example, for someone coming in with crushing chest pain, Buoy immediately suggests the ER as opposed to asking more questions. These conditions are consistently being monitored by Buoy’s internal clinical team.”

Currently a version of the chatbot is sold to health insurers and self-insured employers, Crotty said. “Investors include Optum, Cigna, Humana and F-Prime,” he added.

To get a quick look at how the chatbot might be impacting people’s care-seeking intentions, the researchers combed through 158,083 encounters between the program and patients. The average patient age was 40, and 78% were women.

Patient queries most often concerned the reproductive system, followed by general symptoms and gastrointestinal issues, the researchers found.

The most common symptom types were pain, abnormal functioning and discharge. The largest share of patients using the chatbot, 47%, initially thought they should see a primary care physician, while 34% said they were uncertain, 9% thought they should seek urgent care and another 9% thought they should head to the emergency room.

The researchers couldn’t determine what the patients did after getting an analysis of their symptoms. “That would be something for a follow-up study,” Winn said.

Dr. Gabe Kelen was both “excited” and “concerned” about the new tool.

“I wonder, what is the business model for the company,” said Kelen, a professor and chair of the department of emergency medicine at Johns Hopkins Medicine in Baltimore, Maryland. “Are they selling this data to somebody? I wonder if patients are unwittingly putting their health status out there on the internet.”

On the other hand, “the exciting part is this technology probably not only can’t be stopped but also if properly developed and used could be a disrupter in healthcare delivery,” said Kelen, who wasn’t involved in the study. “I really like that and think it is the future. In our department we have developed an electronic triage tool. We looked at hundreds of thousands of cases and tested it in a number of settings.”

The idea behind the Hopkins tool is to ferret out the patients with the most urgent needs for care, Kelen said.

SOURCE: JAMA Network Open, online December 27, 2019.


‘An Arm And A Leg’: Tradition Grows Into $1 Million Gift For People In Medical Debt


Every year — for decades — the Buehler family and friends have organized a softball tournament in the Cincinnati, Ohio, area to raise money for someone with big medical expenses.

“It’s like a holiday for us in the family,” Ed Buehler, 40, said. “You know, another one that just happens to come in July.”

The tournament started in 1980 as a fundraiser for Ed’s dad, Denny Buehler, who was battling leukemia and needed to travel to Seattle for treatment. The tournament typically raises about $10,000 each year.

“I don’t want to say $10,000 is not a lot of money,” Ed Buehler said. “But life is hard, and when something’s gotten in your way, $10,000 doesn’t go really, really far.”

In 2019, inspired by RIP Medical Debt, the Denny Buehler Memorial Foundation took on a new project. The foundation decided to buy up old medical debt — at pennies on the dollar — to pay off $1 million in debt for neighbors in the Cincinnati community.

In early December, the foundation met its fundraising goal and has plans to keep going.

This story has it all: softball, beer, late-night TV host John Oliver and a punk-rock singer turned Girl Scouts mom. And victories … for scores of people who had carried old debt for years.

Season 3 is a co-production of Kaiser Health News and Public Road Productions.

To keep in touch with “An Arm and a Leg,” subscribe to the newsletter. You can also follow the show on Facebook and Twitter. And if you’ve got stories to tell about the health care system, the producers would love to hear from you.

To hear all Kaiser Health News podcasts, click here.

And subscribe to “An Arm and a Leg” on iTunes, Pocket Casts, Google Play or Spotify.
