
Thomas Hale-Kupiec, MJLST Staff Member

President Barack Obama proposed spending $215 million on a 'precision medicine' initiative. The largest part of the money, $130 million, would go to the National Institutes of Health to create a population-scale study. This study would build a database of genetic, environmental, lifestyle, medical, and microbial data from both healthy and sick volunteers, with the aim of accelerating medical research and personalizing treatments. Though some would call this a "bio-bank," Francis Collins, director of the National Institutes of Health, said the project is greater than that, as it would combine data from what he called more than 200 large, ongoing American health studies that together involve at least two million people. "Fortunately, we don't have to start from scratch," he said. "The challenge of this initiative is to link those together. It's more a distributed approach than centralized." Further, the President immediately attempted to alleviate concerns related to privacy: "We're going to make sure that protecting patient privacy is built into our efforts from Day 1. . . I'm proud we have so many patients-rights advocates with us here today. They're not going to be on the sidelines. This is not going to be an afterthought. They'll help us design this initiative from the ground up, making sure that we harness the new technologies and opportunities in a responsible way."

Three major issues seem to be implicated in this proposed database study. First, both informed consent and incidental findings seem to be problematic in this model. When drawing information from the existing American health studies, the government may be bypassing what participants initially consented to when agreeing to join those studies. Further, incidental findings and individual research results of potential health, reproductive, or personal importance to individual contributors are implicated in these studies; these aspects need to be considered in order to avoid liability going forward and to provide participants with clear expectations of how their information may be used. Second, collection and retention of this information raise concerns. Questions about when, where, and for how long this information is held create a vast array of privacy concerns. Further, security of this information may be implicated, as some of this data may be personal. Third, deletion or removal of this information may be an issue if the program is ever discontinued or if participants are allowed to opt out. Options after closure include destroying the specimens, transferring them to another facility, or letting them sit unused in freezers. These raise a multitude of questions about what to do with specimens and what level of consent should be required.

Overall, this database seems to hold immeasurable potential for the future of medicine. That said, legal and ethical considerations must be addressed during this new policy's development and implementation; with this immeasurable power comes great responsibility.

Benjamin Borden, MJLST Staff Member

Last week the Federal Communications Commission (FCC) voted to approve net neutrality rules. In doing so, the FCC barred Internet Service Providers (ISPs) from blocking web traffic and from charging websites for priority service. Additionally, the FCC asserted authority to regulate broadband providers as public utilities. The decision was met with sharp criticism from some and applauded by others. This post, however, does not seek to dissect the benefits and drawbacks of these newly passed rules. Rather, it takes a step back and uses the net neutrality debate to put the overall conversation about Internet use and freedom in the United States in perspective against other approaches around the world. Comparing that view with the Chinese government's relationship to Internet use yields striking results.

As Jyh-An Lee and Ching-Yi Liu assert in their article, Forbidden City Enclosed by the Great Firewall: The Law and Power of Internet Filtering in China, the Chinese government has built an advanced Internet filtering system meant to block any content the government views as a threat to the Chinese state. Additionally, the Chinese government has used technological advances to control the Internet and ultimately to advance its own ideology there. Drawing on Lawrence Lessig's theory, the authors argue that controlling access to Internet content regulates people's online behavior far more effectively than using the law to do so.

Based on the recent net neutrality approval, regulators in the United States will likely use both the power of open access and the power of the law to ensure that the Internet remains a place for all. The U.S. government is concerned with whether ISPs may charge certain websites a fee to give them priority speeds over other, non-paying websites. The Chinese government, on the other hand, is seeking technological advances that will allow it to better censor Internet content for those living in China. This is not a suggestion that American principles of Internet freedom must be adopted by every nation around the globe, as the domestic politics of nations are protected by norms of sovereignty. The Internet, however, is a great equalizer of information and knowledge. And while certain nations are arguing about the best means of delivering Internet content to their citizens, others are still attempting to severely limit what their populations may see. The latter strategy will likely discourage innovation in the long run and will decrease people's trust in the Internet as a place of unfettered knowledge.

The recent ruling on net neutrality may not present the perfect answer to the question of how to regulate the Internet and broadband providers. The debate itself, however, makes clear that Internet access in the United States is focused on ensuring access for all people. Such a paradigm seems more likely to succeed in a technologically driven world compared with one that seeks to limit anti-government content. Innovation stems from a desire to do more. If the overarching policy goal is to stop content, then what incentive is there to advance?

Revisiting the Idea of a Patent Small Claims Court


Comi Sharif, Managing Editor

In 2009, Robert P. Greenspoon explored the idea of adjusting the patent court system to improve efficiency in the adjudication of small-scale claims. His article, Is the United States Finally Ready for a Patent Small Claims Court?, appearing in Volume 10, Issue 2 of the Minnesota Journal of Law, Science & Technology, pointed out the deterrent-like effect that the high transaction costs of traditional patent litigation have on inventors trying to protect their intellectual property. Greenspoon argues that if patent holders are merely trying to recover small sums from infringers, the cost of the lengthy and expensive patent litigation system currently in effect often outweighs the remedies available through litigation. As a result, Greenspoon suggests the creation of a "Patent Small Claims Court" to resolve these issues. Since it has been over five years since Greenspoon's article, it makes sense to reexamine this topic and identify some of the recent developments related to it.

In May of 2012, the USPTO and the United States Copyright Office co-sponsored a roundtable discussion to consider the possible introduction of small claims courts for patent and copyright claims. A few months later, the USPTO held another forum focused solely on patent small claims proceedings. A major emphasis of these discussions was the conformity of the new court with the U.S. Constitution (an issue addressed by Greenspoon in his article). In December of 2012, the USPTO published a questionnaire to seek feedback from the public on the idea of a patent small claims court. The survey covered matters relating to subject matter jurisdiction, venue, case management, appellate review, and available remedies; the official request and list of questions were published in the Federal Register. The deadline for submitting responses has since passed, but the results of the survey remain unclear.

Greenspoon's article also addresses a few of the unsuccessful past attempts to create a small claims patent court. In 2013, the House of Representatives passed a bill that authorized further study into the idea of developing a pilot program for patent small claims procedures in certain judicial districts. See H.R. 3309, 113th Cong. (2013). The Senate did not pass the bill, however, so no further progress occurred.

Overall, though there appears to be continued interest in creating a patent small claims system, it doesn't seem likely that one will be created in the near future. The idea is far from dead, though, and perhaps some of Greenspoon's proposals can still help influence a change. Stay tuned.

Steven Groschen, MJLST Staff Member

Facebook recently announced a new policy that grants users the option of appointing an executor of their account. This policy change means that an individual's Facebook account can continue to exist after its original creator has passed away. Although Facebook status updates from "beyond the grave" are certainly a peculiar phenomenon, they fit nicely into the larger debate over how to handle one's digital assets after death.

Rebecca G. Cummings, in her article The Case Against Access to Decedents' Email: Password Protection as an Exercise of the Right to Destroy, discusses some of the arguments for and against providing access to a decedent's online account. Those favoring access may assert one of two rationales: (1) access eases administrative burdens for personal representatives of estates; and (2) digital accounts are merely property to be passed on to one's descendants. The response from those opposing access is that the intent of the deceased should be honored above other considerations. Further, they argue that if there is no clear intent from the deceased (which is not uncommon, because many Americans die without wills), then the presumption should be that the decedent's online accounts were intended to remain private.

Email and other online accounts (e.g., Facebook, Twitter, dating profiles) present novel problems for the property rights of the deceased. Historically, a diary or the occasional love letter was among the most intimate property that could be transferred to one's descendants. The vast catalog of information available in an email account drastically changes what can be passed on. In contrast to a diary, an email account contains far more than the highlights of an individual's day -- emails provide a detailed record of an individual's daily tasks and communications. Interestingly, this in-depth cataloging of daily activities has led some to argue that the information should be passed on as a way of creating a historical archive. There is certainly historical value in preserving an individual's social media or email accounts; however, it must be balanced against the potential invasion of his or her privacy.

As of June 2013, seven states had passed laws that explicitly govern digital assets after death. The latest development in this area, however, is the Uniform Fiduciary Access to Digital Assets Act, created by the Uniform Law Commission. The act attempts to create consistency among the states in how digital assets are handled after an individual's death. Presently, the act is being considered for enactment in fourteen states. It grants fiduciaries in certain instances the "same right to access those [digital] assets as the account holder, but only for the limited purpose of carrying out their fiduciary duties." Whether this act will satisfy both sides of the debate remains to be seen.

Neal Rasmussen, MJLST Staff Member

In "Notes from Underground: Hydraulic Fracturing in the Marcellus Shale" from Volume 12, Issue 2 of the Minnesota Journal of Law, Science & Technology, Joseph Dammel discussed the then current state of hydraulic fracturing ("fracking") and offered various "proposals that protect public concerns and bolster private interests." Since publication of this Note in 2011, there have been major changes in the hydraulic fracturing industry as more states and cities begin to question if the reward is worth the risk.

Since 2011, required disclosures of the fluids used in fracking have become effective in fourteen additional states, increasing the overall number of states that require disclosure to twenty. While required disclosures have alleviated some concerns, many believe this is not enough and have pushed to ban fracking outright. Vermont became the first state to do so in 2012. Although progressive, the ban was largely symbolic, as Vermont contains no major natural gas deposits. In late 2014, however, New York Governor Andrew Cuomo made a landmark decision by announcing that fracking would be banned within New York State. Many cities have begun to pass bans as well, including Denton, Texas, right in the heart of oil and natural gas country. Citing concerns about the potential health risks associated with the activity, Florida could be the next state to join the anti-fracking movement. In late 2014, two Florida senators introduced a bill that sought to ban all fracking activities, and a state representative introduced a similar bill at the beginning of 2015.

The bans have not been without controversy. The fracking industry has challenged many of the local bans, arguing that they are preempted by state law and exceed the cities' authority. After Denton passed its local ban, the Texas Oil & Gas Association sought an injunction, arguing that the city did not have authority to implement such a ban. It is yet to be seen whether the challenge will succeed, but if the results in Colorado, where local fracking bans have been overturned due to state preemption, are any indication, the fracking industry should be confident. Unless and until there is a major federal decision on fracking regulation, the industry will be required to juggle the various state and local regulations, which are becoming less friendly as fracking becomes more controversial nationwide.


Privacy in the Workplace and Wearable Technology


Jessica Ford, MJLST Staff Member

Lisa M. Durham Taylor's article, The Times They Are a-Changin': Shifting Norms and Employee Privacy in the Technological Era, in Volume 15, Issue 2 of the Minnesota Journal of Law, Science & Technology discusses employee workplace privacy rights with regard to new technologies. Taylor spends much of the article on privacy concerns surrounding correspondence in the workplace. She notes that in certain cases employees may be able to expect their personal email account correspondence to remain private, as seen in the 2008 case Pure Power Boot Camp, Inc. v. Warrior Fitness Boot Camp, LLC. Generally, however, employers can legally monitor email messages and the websites an employee visits, including personal accounts.

Since Taylor's article, new technologies have emerged, bringing new privacy implications for the workplace with them. Wearable technologies such as Google Glass, smart watches, and fitness bands find themselves in a legal void, particularly with regard to privacy. Several workplaces have implemented Google Glass through Google's Glass at Work program. While this could help productivity, especially in medical settings, it could also mean that an employer is able to review every recorded moment, even those containing personal conversations or experiences.

Smart watches could also have a troubling future due to the lack of legal boundaries. At the moment, it would be simple for a company to require employees to wear GPS-enabled smart watches and use them to track employees' locations, see whether an employee is exceeding his break time, and communicate with employees instantaneously. Such uses could be frustrating, if not invasive. All messages and activities could also be tracked outside of the office, essentially eliminating any semblance of personal privacy. Additionally, as Taylor notes in her article, there is case precedent upholding a "public employer's search of text messages sent from and received on the employee's employer-issued paging device." That 2010 case, City of Ontario v. Quon, allowed the employer's search even though it extended to personal messages.

For the moment, it appears that employers are erring on the side of caution. It will take some time to see whether the legal framework Taylor discusses will be applied to wearable technologies and whether it will be more permissive or restrictive for employers.

Could Changes for NEPA Be on the Horizon?


Allison Kvien, MJLST Staff Member

The National Environmental Policy Act (NEPA) was one of the first broad, national environmental protection statutes ever written. NEPA's aim is to ensure that agencies give proper consideration to the environment prior to taking any major federal action that significantly affects it. NEPA requires agencies to prepare Environmental Impact Statements (EISs) and Environmental Assessments (EAs) for these projects. NEPA is often criticized as ineffective in the courts for environmental plaintiffs seeking review of federal agency actions. Environmental petitioners who have brought NEPA issues before the Supreme Court have never won:
The Court has never reversed a lower court ruling on the ground that the lower court failed to apply NEPA with sufficient rigor. Indeed, as described at the outset, the Court has not even once granted review to consider the possibility that a lower court erred in that direction and then heard the case on the merits. The Court has instead reviewed cases only when NEPA plaintiffs won below, and then the Court has reversed, typically unanimously.
Because environmental plaintiffs have never won before the Supreme Court on a NEPA issue, many view the statute as a weak tool and have wanted to strengthen or overhaul NEPA.
According to a recent report from the Environmental Law Reporter, President Obama is now "leaning on NEPA" for the work he hopes to accomplish in improving the permitting process for infrastructure development, but it does not look like he is working to improve NEPA itself:
The president's initiative has identified a number of permitting improvements, but it does not include a serious effort to force multiple agencies to align their permitting processes. A key to forcing multiple agencies to work together on project reviews and approvals is found in an unlikely place: NEPA. The statute is overdue for a makeover that will strengthen how it identifies and analyzes environmental impacts for federal decisionmakers. In doing so, it can provide the framework that will require multiple agencies to act as one when reviewing large projects.
Though Obama's proposal may not address improvements to NEPA itself, could it help those who have long wished to give NEPA an overhaul? This is not the first time in the last couple of years that the President has talked about using NEPA. In March 2013, Bloomberg reported that Obama was "preparing to tell all federal agencies for the first time that they should consider the impact on global warming before approving major projects, from pipelines to highways." With NEPA being key to some of President Obama's initiatives, could there be more political capital to make some long-sought changes to NEPA? There might be some hope for NEPA just yet.

Sen "Alex" Wang, MJLST Staff Member

In Crawford v. Washington, the Supreme Court, in a unanimous decision, overruled its earlier decision in Ohio v. Roberts by rejecting the admission of the out-of-court statement at issue because of its nature as "testimonial" evidence. However, it was not clear whether the constitutional right of confrontation applied only to traditional witnesses (like the declarant in Crawford) or whether it also applied to scientific evidence and experts. The Court subsequently clarified this point in Melendez-Diaz v. Massachusetts and Bullcoming v. New Mexico, where it upheld the confrontation right of defendants to cross-examine the analysts who performed the scientific tests. However, compared with traditional testimony from eyewitnesses, scientific evidence (e.g., blood alcohol measurement, field breathalyzer, genetic testing) is a relatively new development in criminal law. The advancement of modern technologies creates a new question, namely whether this evidence is sufficiently reliable to avoid triggering the Confrontation Clause.

This question is discussed in a student note and comment titled The Admission of Scientific Evidence in a Post-Crawford World in Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology. The author, Eric Nielson, pointed out that the ongoing dispute in the Court about requiring analysts to testify before admitting scientific findings missed the mark. Specifically, scientific evidence, especially the result of an analytical test, is an objective, not subjective, determination. In the courtroom, the testimony of a scientific witness is based mainly on a review of the content of the witness's report, not on his memories. Thus, according to the author, Justice Scalia's bold statement in Crawford that "reliability is an amorphous, if not entirely subjective, concept[,]" may be right in the context of traditional witnesses, but it is clearly wrong in the realm of science, where reliability is a measurable quantity. In particular, the author suggested that scientific evidence should be admitted under the standard articulated by the Court in Daubert v. Merrell Dow Pharmaceuticals.

As emphasized by the author, a well-drafted, technical report should answer all of the questions that would be asked of the analyst. Given that there is currently no national or widely-accepted set of standards for forensic science written reports or testimony, the author proposed the following key components to be included in a scientific report conforming to the Daubert standard: 1) sample identifier, including any identifier(s) assigned to the sample during analysis; 2) documentation of sample receipt and chain of custody; 3) analyst's name; 4) analyst's credentials; 5) evidence of analyst's certification or qualification to perform the specific test; 6) laboratory's certification; 7) testing method, either referencing an established standard (e.g., ASTM E2224 - 10 Standard Guide for Forensic Analysis of Fibers by Infrared Spectroscopy) or a copy of the method if it is not publicly available; 8) evidence of the effectiveness and reliability of the method, either from peer reviewed journals, method certification, or internal validation testing; 9) results of testing, including the results of all standards or controls run as part of the testing; 10) copies of all results, figures, graphs, etc; 11) copy of the calibration log or certificate for any equipment used; 12) any observations, deviations, and variances, or an affirmative statement that none were observed; 13) analyst's statement that all this information is true, correct, and complete to the best of their knowledge; 14) analyst's statement that the information is consistent with various hearsay exceptions; 15) evidence of second-party review, generally a supervisor or qualified peer; 16) posting a copy to a publicly maintained database; 17) notifying the authorizing entity via email of the completion of the work and the location of the posting.
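For readers who find it easier to see the proposal as a structured record, here is one way the seventeen components might be captured and checked for completeness. This is a hypothetical sketch in Python offered purely for illustration; the field names, types, and the completeness check are assumptions of this post, not anything drawn from Nielson's note or from an existing forensic reporting standard.

from dataclasses import dataclass, field, fields
from datetime import date
from typing import List, Optional

@dataclass
class ForensicReport:
    """Hypothetical record mirroring the note's proposed report components."""
    sample_identifier: str                      # 1) sample ID, including IDs assigned during analysis
    chain_of_custody: List[str]                 # 2) documentation of sample receipt and custody transfers
    analyst_name: str                           # 3) analyst's name
    analyst_credentials: str                    # 4) analyst's credentials
    analyst_certification: str                  # 5) certification or qualification to perform the specific test
    laboratory_certification: str               # 6) laboratory's certification
    testing_method: str                         # 7) reference to an established standard, or the method itself
    method_validation: str                      # 8) evidence of the method's effectiveness and reliability
    results: str                                # 9) results of testing, including standards and controls
    attachments: List[str] = field(default_factory=list)          # 10) copies of results, figures, graphs, etc.
    calibration_records: List[str] = field(default_factory=list)  # 11) calibration logs or certificates
    deviations: str = "None observed"           # 12) observations, deviations, and variances, if any
    truth_statement: bool = False               # 13) analyst affirms the report is true, correct, and complete
    hearsay_statement: bool = False             # 14) analyst affirms consistency with hearsay exceptions
    reviewer: Optional[str] = None              # 15) second-party review by a supervisor or qualified peer
    database_posting: Optional[str] = None      # 16) location of the copy posted to a public database
    notification_date: Optional[date] = None    # 17) date the authorizing entity was notified of completion

    def missing_components(self) -> List[str]:
        """Return the names of components that are still empty -- a crude completeness check."""
        return [f.name for f in fields(self)
                if getattr(self, f.name) in (None, "", [], False)]

On this sketch, a report whose missing_components() list is empty would contain every element the author argues a Daubert-conforming report should include.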

Per the author, because scientific evidence is especially probative, the current refusal to demand evidence of reliability, method validation, and scientific consensus has allowed shoddy work and practices to impersonate dependable science in the courts. This is an injustice to the innocent and the guilty alike.

Dan Keith, MJLST Staff Member

In May of 2010, the Dow Jones Industrial Average plunged nearly 1,000 points and then recovered, all within about half an hour. The disturbing part? No one knew why.

An investigation by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) determined that, in complicated terms, "a rapid automated sale of 75,000 E-mini S&P 500 June 2010 stock index futures contracts (worth about $4.1 billion) over an extremely short time period created a large order imbalance that overwhelmed the small risk-bearing capacity of financial intermediaries--that is, the high-frequency traders and market makers." After about 10 minutes of purchasing the E-mini, High Frequency Traders (HFTs) began rapidly selling the same instrument to unload their swollen inventories. This unloading came at a time when liquidity was already low, so the rapid and aggressive selling accelerated the downward spiral. As a result of this volatility and the overflowing inventory of the E-mini, HFTs were passing contracts back and forth in a game of financial "hot potato."

In simpler terms, on this day in May of 2010, a number of HFT algorithms had "glitched," generating a feedback loop that sent stock prices spiraling.

This event put High Frequency Trading on the map, for both the public and regulators. The SEC and the CFTC have responded with significant regulations meant to curb the mechanistic risks that left the stock market vulnerable in the spring of 2010. Those regulations include new reporting systems like the Consolidated Audit Trail (CAT), which is supposed to allow regulators to track HFT activity through the data it produces as it comes in. Furthermore, Regulation Systems Compliance and Integrity (Reg SCI), a regulation still being negotiated into its final form, would require that HFTs and other eligible financial groups "carefully design, develop, test, maintain, and surveil systems that are integral to their operations. Such market participants would be required to ensure their core technology meets certain standards, conduct business continuity testing, and provide certain notifications in the event of systems disruptions and other events."

While these regulations are appropriate for the mechanistic failures of HFT activity, regulators have largely overlooked an aspect of High Frequency Trading that deserves more attention--nefarious, manipulative HFT practices. These come in the form of either "human decisions" or "nefarious" mechanisms built into the algorithms that animate High Frequency Trading. "Spoofing," "smoking," or "stuffing"--the names and details vary, but each of these activities involves placing large orders for stock and quickly cancelling or withdrawing them in order to create false market data.
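To see why this place-and-cancel pattern leaves a footprint in the kind of data a system like CAT is meant to collect, consider the toy screen below, which flags traders whose cancellation rate over a window of order events is suspiciously high. This is a hypothetical sketch in Python; the event format, field names, and thresholds are assumptions of this post, not a description of any actual regulatory surveillance tool.

from collections import Counter
from typing import Dict, List, Tuple

# Each event is (trader_id, action), where action is "order", "cancel", or "fill".
# This toy format stands in for the far richer audit-trail data regulators would receive.
Event = Tuple[str, str]

def flag_possible_spoofing(events: List[Event], min_orders: int = 50,
                           cancel_ratio_threshold: float = 0.95) -> Dict[str, float]:
    """Return traders whose cancel-to-order ratio meets or exceeds the threshold.

    A very high ratio of cancelled to placed orders is one crude signal of the
    place-and-withdraw pattern described above; real surveillance would also weigh
    order size, timing, and price placement.
    """
    orders: Counter = Counter()
    cancels: Counter = Counter()
    for trader, action in events:
        if action == "order":
            orders[trader] += 1
        elif action == "cancel":
            cancels[trader] += 1

    flagged = {}
    for trader, placed in orders.items():
        if placed >= min_orders and cancels[trader] / placed >= cancel_ratio_threshold:
            flagged[trader] = cancels[trader] / placed
    return flagged

# A trader who places 100 orders and cancels 99 is flagged; one who cancels only 40 is not.
sample = ([("hft_a", "order")] * 100 + [("hft_a", "cancel")] * 99
          + [("hft_b", "order")] * 100 + [("hft_b", "cancel")] * 40)
print(flag_possible_spoofing(sample))  # {'hft_a': 0.99}

The point of the sketch is simply that the raw ingredients for prevention-oriented screening, rather than after-the-fact deterrence, are already implicit in the reporting regimes regulators have begun to build.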

Regulators have responded with "deterrent"-style legislation that outlaws this type of activity. Regulators and lawmakers have yet, however, to introduce regulations that would truly "prevent" as opposed to simply "deter" these types of activities. Plans for truly preventative regulations can be modeled on current practices and existing regulations. A regulation of this kind only requires the right framework to make it truly effective as a preventative measure, stopping "Flash Crash" type events before they can occur.

Catherine Cumming, MJLST Staff Member

Though the Deepwater Horizon spill occurred nearly five years ago, the civil trial over the disaster's environmental and economic effects continues. This past week, the U.S. government continued to build its case against BP, arguing that BP should pay the maximum Clean Water Act penalty of $13.7 billion. Federal prosecutors brought in expert witnesses to describe the spill's devastating environmental and economic effects on the Gulf. In addition to arguing that BP deserves to pay the $13.7 billion penalty, the prosecutors contend that BP can pay this fine. To support this argument, the U.S. government brought in financial expert Ian Ratner to testify that BP is financially able to pay the Clean Water Act penalty. While BP is fighting for a lower penalty of approximately $3.19 billion, the statutory minimum, Ratner's financial analysis supports a higher penalty. In fact, BP's assets have increased since the 2010 spill. As of June 30, 2014, BP's assets totaled $315 billion, "up from the $236 billion the year before the spill."

On Monday, January 26, the trial resumes and BP begins calling its witnesses. It is likely that BP will continue to argue "that the court should consider BP XP and its resources, rather than those of the larger parent group [BP], when determining a penalty. The smaller drilling subsidiary [BP XP] is the named defendant in the case." Anadarko, a co-owner of the failed oil well, argues "it had no role in the operation of the well and should not have to pay anything." The briefs are expected to be filed in April, with a ruling from U.S. District Judge Carl Barbier to follow.

As the trial progresses and the Deepwater Horizon spill nears its five-year anniversary, readers should look at The BP Blowout and the Social and Environmental Erosion of the Louisiana Coast, which discusses the troubles the Gulf and its communities faced before the spill as well as how the spill exacerbated those issues. Daniel A. Farber believes that the situation of the Gulf is a preview of problems that the United States and the world will face in years to come. Farber writes that "there are many small initiatives that can cumulatively begin to make inroads on the Gulf's problems, including, most obviously, efforts to ensure that the BP oil spill is not followed by similar disasters." Though MJLST published this article in 2012, Farber's analysis and proposal are pertinent in today's environmental and economic discussions, particularly those related to Congress's actions regarding the Keystone XL Pipeline.