Business Data
Personal Bio

Pascal manages the Smart Content for Research department, which focuses on automatically structuring and enriching content to support innovation in Elsevier products for the Research market. The department brings together experts in NLP, machine learning, knowledge representation, and big data, and is developing the new Elsevier All Sciences Taxonomy. Pascal has had a long and productive career in text and data mining. He developed TDM products for IBM and was one of the co-founders of Temis, a text and data mining company, where he served as both CTO and R&D Manager.


Presentation title: 
Dynamic Knowledge Stores and Machine Translation
Presentation description: 
With a tradition of more than 130 years in high-quality publishing, Elsevier is currently one of the largest academic and scientific publishers in the world, and the owner of some of the biggest scientific content repositories. Approximately 3,000 journal titles, 440,000 articles, 10,000 books, 1.4 million submissions per year, and 61 million abstracts indexed by Scopus explain why Elsevier is transforming and expanding from a publishing company into a data analytics giant. It is under this strategic decision that products such as ScienceDirect, Health Advance, Mendeley, SciVal, Pure, and Reaxys are being transformed and expanded into smart content systems that can deliver the major ecosystem that academics and professionals across all disciplines need to advance their careers. Several challenges emerge from this transition.

The first challenge is how to create high-quality, non-trivial Knowledge Graphs from our content. What technologies on the market enable us to quickly turn our content repositories (back files of journals, titles and abstracts of articles, and all associated metadata) into coherent and valuable concept and data networks? Are there any examples we can compare ourselves to? Another key question in this space is how Knowledge Graphs, in the sense of highly interconnected knowledge models cast in databases and networks that allow for search, reasoning, and retrieval, can be turned into value. What business models exist, with current, actual examples, or is the semantic web just a dream without a business model? What we want to get out of this challenge is, first, an answer to the question of what state-of-the-art technologies are in actual use to turn content into knowledge models, preferably in a publishing or scientific context; and second, an understanding of what applications can be built on those technologies.

The second challenge is how we can deliver smart and personalized content in multiple languages.
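To make the first challenge concrete, the following is a minimal, hypothetical sketch of turning article metadata into a concept network of (subject, predicate, object) triples and retrieving related articles over it. The article records, field names, and concepts are invented for illustration; a production system would use an RDF store or graph database rather than in-memory sets.

```python
# Hypothetical article metadata; in practice this would come from
# journal back files and their associated metadata.
articles = [
    {"id": "A1", "title": "Graphene sensors", "concepts": ["graphene", "biosensor"]},
    {"id": "A2", "title": "Graphene batteries", "concepts": ["graphene", "energy storage"]},
]

# Build the graph as a set of (subject, predicate, object) triples.
triples = set()
for art in articles:
    for concept in art["concepts"]:
        triples.add((art["id"], "mentions", concept))

def related(article_id):
    """Articles one hop away: those sharing at least one concept."""
    concepts = {o for s, p, o in triples if s == article_id and p == "mentions"}
    return sorted({s for s, p, o in triples
                   if o in concepts and s != article_id})

print(related("A1"))  # ['A2'] — linked via the shared concept "graphene"
```

Even this toy graph supports the kind of retrieval the abstract alludes to: once content is cast as triples, "related work" becomes a graph traversal rather than a keyword match.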
Although Machine Translation has become a commodity in the form of Google Translate, actual applications in scientific publishing and in search, discovery, and database products cannot be served by approximate translation. Our publishing standards require carefully curated translation work. Elsevier owns and maintains a sizeable number of taxonomies modelling many areas of science. While taxonomies are a solution to search, navigation, and indexing challenges, we wonder how we can put them to use in a translation context. Our challenge is to find the right methods and applications both for translating taxonomies and for translating major reference works. And can translated taxonomies be used for the translation of books? The aim of this talk is to present in detail the technical difficulties faced in these challenges and to describe some of the approaches that are currently being adopted.
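One way a taxonomy can be put to use in a translation context is as a terminology constraint: curated multilingual labels replace domain terms before (or instead of) a general-purpose MT engine. The sketch below assumes an invented two-entry taxonomy with German labels; a real pipeline would hand the remaining text to a full MT system rather than leave it untranslated.

```python
# Hypothetical multilingual taxonomy: concept label -> curated translations.
taxonomy = {
    "myocardial infarction": {"de": "Myokardinfarkt"},
    "randomized controlled trial": {"de": "randomisierte kontrollierte Studie"},
}

def pretranslate(text, lang):
    """Replace known taxonomy terms with their curated labels.

    Longest terms are substituted first so that multi-word terms
    win over any of their substrings.
    """
    for term in sorted(taxonomy, key=len, reverse=True):
        label = taxonomy[term].get(lang)
        if label:
            text = text.replace(term, label)
    return text

print(pretranslate("a randomized controlled trial on myocardial infarction", "de"))
# a randomisierte kontrollierte Studie on Myokardinfarkt
```

The design point is that the curated labels are authoritative: the MT engine never gets the chance to approximate a term the taxonomy already fixes, which is exactly the quality guarantee approximate translation cannot give.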