Patentability and PHOSITA in the AI Era—A Japanese Perspective

Author(s):  
Ichiro Nakayama

Although it remains unclear whether AI can generate inventions autonomously without human intervention, recent developments in AI have produced inventions in AI technologies such as machine learning (deep learning). Inventors have also begun to use AI as a tool to help them create inventions. These AI-assisted inventions raise urgent and practical issues of patentability, such as patentable subject matter (patent eligibility), disclosure requirements, and inventive step (non-obviousness). The Japanese Patent Office (JPO) updated its Examination Handbook to address some of these issues. For instance, it discussed to what extent inventors should disclose in patent applications, because AI as a black box does not explain how problems are solved. However, the JPO did not pay much attention to the possibility that not only inventors but also a person having ordinary skill in the art (PHOSITA) might use AI, and that a PHOSITA aided by AI could create inventions more easily, thereby raising the bar for inventive step. This chapter critically reviews the JPO’s updated Handbook and discusses whether and how the use of AI by a PHOSITA can be taken into account in examining inventive step.

Author(s):  
Gaétan de Rassenfosse ◽  
William E Griffiths ◽  
Adam B Jaffe ◽  
Elizabeth Webster

Abstract A low-quality patent system threatens to slow the pace of technological progress. Concerns about low patent quality are supported by estimates from litigation studies suggesting that most US patents granted should not have been issued. We propose a new model for measuring patent quality, based on equivalent patent applications submitted to multiple offices. Our method allows us to distinguish whether low-quality patents are issued because an office implements a low standard or because it violates its own standard. The results suggest that quality in patent systems is higher than previously thought. Specifically, the percentage of granted patents that are below each office’s own standard is under 10% for all offices. The Japanese patent office has a higher percentage of granted patents below its own standard than the offices of Europe, the USA, Korea, and China. This result arises from the fact that Japan applies a higher standard than the other offices. (JEL O34, K2, L4, F42)


2021 ◽  
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

Abstract The predictive performance of a machine learning model depends heavily on the corresponding hyper-parameter setting. Hence, hyper-parameter tuning is often indispensable. Normally such tuning requires the dedicated machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred through low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning has received little research attention. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data. The goal is to optimize the combined prediction accuracy. The presented framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows it to efficiently parallelize the optimization in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that, with an improvement in terms of mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes, MODES outperforms the baseline, i.e., carrying out tuning with MBO on each node individually with its local sub-data set.
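The core MBO loop the abstract refers to can be sketched in a few lines: evaluate an initial random design, fit a cheap surrogate to the observed (hyper-parameter, score) pairs, and repeatedly evaluate the candidate the surrogate rates best. The sketch below is a minimal, illustrative toy over a single hyper-parameter, using a 1-nearest-neighbour surrogate rather than the Gaussian-process or random-forest surrogates typically used in MBO; the function names are ours, not part of the MODES framework.

```python
import random

def mbo_tune(objective, bounds, n_init=5, n_iter=20, n_candidates=100, seed=0):
    """Minimal model-based optimization loop (toy sketch).

    1. Evaluate the objective at a few random points (initial design).
    2. Fit a cheap surrogate to the observations (here: 1-nearest-neighbour).
    3. Propose random candidates, pick the one the surrogate scores best,
       evaluate it for real, and repeat.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    history = []  # observed (hyper-parameter, score) pairs

    def surrogate(x):
        # Predict the score of x as the score of the closest evaluated point.
        nearest = min(history, key=lambda p: abs(p[0] - x))
        return nearest[1]

    # initial random design
    for _ in range(n_init):
        x = rng.uniform(lo, hi)
        history.append((x, objective(x)))

    # surrogate-guided iterations
    for _ in range(n_iter):
        candidates = [rng.uniform(lo, hi) for _ in range(n_candidates)]
        x = max(candidates, key=surrogate)  # best candidate per surrogate
        history.append((x, objective(x)))   # expensive real evaluation

    return max(history, key=lambda p: p[1])

# toy "validation accuracy" curve over one hyper-parameter, peaked at 0.3
best_x, best_score = mbo_tune(lambda x: 1.0 - (x - 0.3) ** 2, (0.0, 1.0))
```

In the distributed setting the paper targets, the expensive `objective` call would be replaced by training and evaluating models on the nodes' local data; the surrogate-guided proposal loop stays the same.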


Author(s):  
Charles-Henry Bertrand Van Ouytsel ◽  
Olivier Bronchain ◽  
Gaëtan Cassiers ◽  
François-Xavier Standaert

Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), which is concerned with the development of new methods that explain and interpret machine learning models, has been strongly reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
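As a concrete taste of the model-agnostic methods such a taxonomy covers, the sketch below implements permutation feature importance: shuffle one feature at a time and measure how much the black-box model's score drops. This is a minimal illustration on a made-up toy model, not code from the survey's linked implementations, and the helper names are ours.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic permutation importance (toy sketch).

    For each feature, shuffle its column and record how much the model's
    score drops; larger average drops mean the model relied on it more.
    """
    rng = random.Random(seed)
    baseline = metric(model(X), y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            column = [row[j] for row in X]
            rng.shuffle(column)  # break the feature-target association
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
            drops.append(baseline - metric(model(X_perm), y))
        importances.append(sum(drops) / n_repeats)
    return importances

# toy "black box": its prediction depends only on feature 0
model = lambda X: [row[0] for row in X]
accuracy = lambda preds, y: sum(p == t for p, t in zip(preds, y)) / len(y)

X = [[i % 2, (i * 7) % 3] for i in range(40)]
y = [row[0] for row in X]  # labels equal feature 0 exactly

imps = permutation_importance(model, X, y, accuracy)
# imps[0] is large (shuffling feature 0 destroys accuracy);
# imps[1] is zero (the model ignores feature 1 entirely)
```

Because the method only needs predictions, not model internals, it applies unchanged to any of the black-box systems the abstract describes.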


2012 ◽  
Vol 17 (3) ◽  
pp. 562-615 ◽  
Author(s):  
K. Foroughi ◽  
C. R. Barnard ◽  
R.W. Bennett ◽  
D. K. Clay ◽  
E. L. Conway ◽  
...  

Abstract Insurance accounting has for many years proved a challenging topic for standard setters, preparers and users, often described as a “black box”. Will recent developments, in particular the July 2010 Insurance Contracts Exposure Draft, herald a new era? This paper reviews these developments, setting out key issues and implications. It concentrates on issues relevant to life insurers, although much of the content is also relevant to non-life insurers. The paper compares certain IFRS and Solvency II developments, recognising that UK insurers face challenges in implementing new financial and regulatory reporting requirements in similar timeframes. The paper considers resulting external disclosure requirements and a possible future role for supplementary information.


2016 ◽  
Author(s):  
Mark Lemley

At the time patent applications are reviewed, the Patent and Trademark Office has no way of identifying the small number of applications that are likely to end up having real economic significance. Thus patent applications are for the most part treated alike, with every application getting the same - and by necessity sparse - review. In this short magazine piece, we urge in response three basic reforms. First, we would weaken the presumption of validity that today attaches to all issued patents. The modern strong presumption simply does not reflect the reality of patent review; presumptions, in short, should be earned. Second, because legitimate inventors need as much certainty as the law can provide, we would give applicants the option of earning a presumption of validity by paying for a thorough examination of their inventions. Put differently, applicants should be allowed to "gold-plate" their patents by paying for the kind of searching review that would merit a strong presumption of validity. Third and finally, because competitors also have useful information about which patents worry them and which do not, we support instituting a post-grant opposition system, a process by which parties other than the applicant would have the opportunity to request and fund a thorough examination of a recently issued patent. As we explain in the piece, these reforms would together allow the Patent Office to focus its resources on patents that might actually matter, and they would also both reduce the incentive to file patents of questionable validity and reduce the harm caused by such patents in any event.


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Enrico Favaro ◽  
Roberta Lazzarin ◽  
Daniela Cremasco ◽  
Erika Pierobon ◽  
Marta Guizzo ◽  
...  

Abstract Background and Aims The modern development of the black box approach in clinical nephrology is inconceivable without a logical theory of renal function and a comprehension of the anatomical architecture of the kidney, in health and disease: this is the undisputed contribution offered by Malpighi, Oliver and Trueta starting from the seventeenth century. Machine learning models for the prediction of acute kidney injury, progression of renal failure and tubulointerstitial nephritis are a good example of how different kinds of knowledge about the kidney are an indispensable tool for the interpretation of the model itself. Method Historical data concerning these three authors were collected from literature, textbooks, encyclopedias, scientific periodicals and laboratory experimental records. Results The Italian Marcello Malpighi (1628-1694), born in Crevalcore near Bologna, was Professor of anatomy at Bologna, Pisa and Messina. His historic description of the pulmonary capillaries was made in his second epistle to Borelli, published in 1661 and entitled De pulmonibus, using the frog as “the microscope of nature” (Fig. 1). It is the first description of capillaries in any circulation. William Harvey, in De motu cordis in 1628 (the year of publication coincides with the Italian anatomist’s year of birth!), could not see the capillary vessels. This triumphant discovery paved the way for the later recognition of the characteristic renal rete mirabile in the corpuscle of Malpighi, lying within the capsule of Bowman. Jean Redman Oliver (1889-1976), a pathologist born and raised in Northern California, was able to bridge the gap between the nephron and the collecting system through meticulous dissections, hand-drawn illustrations and experiments which underpin our current understanding of renal anatomy and physiology.
In the skillful lecture “When is the kidney not a kidney?” (1949), Oliver summarized his far-sighted vision of renal physiology and disease in the following sentence: the Kidney in health, if you will, but the Nephrons in disease. Because the “nephron”, like the “kidney”, is an abstraction that must be qualified in terms of its various parts, its cellular components and the molecular mechanisms involved in each discrete activity (Fig. 2). The Catalan surgeon Josep Trueta i Raspall (1897-1977) was born in the Poblenou neighborhood of Barcelona. His pioneering and visionary contribution on the role of changes in renal circulation in the pathogenesis of acute kidney injury was pivotal for the history of renal physiology. “The kidney has two potential circulatory pathways. Blood may pass either almost exclusively through one or other of two pathways, or to a varying degree through both” (Studies of the Renal Circulation, published in 1947). This diversion of blood from the cortex to the less resistant medullary circulation is now known by the eponym Trueta shunt. Conclusion The black box approach to kidney diseases should be considered by practitioners as a further tool to help inform model updates in many clinical settings. The number of machine learning clinical prediction models being published is rising, as new fields of application are being explored in medicine (Fig. 3). A challenge in clinical nephrology is to explore the “kidney machine” during each diagnostic and therapeutic procedure. As always, the intriguing relationship between the set of nephrological syndromes and kidney diseases cannot disregard the precious notions of the specific organization of the kidney microcirculation, the fruit of the many scientific contributions of Malpighi, Oliver and Trueta (Fig. 3).

