Open Evidence is a growing company combining behavioural analysis and big data analytics for both public-sector and private-sector clients in domains such as health, digitisation, customer care, consumer policy and many more. We are expanding our operations and are looking for a data scientist to lead our team of junior data scientists and data engineers. He/she should be experienced (age bracket 37-40) in managing both people and clients, and should be able to mentor our junior professionals and help them grow. He/she should be a data scientist with knowledge and understanding of data engineering and data architecture issues, so as to be able to supervise both junior data scientists and junior data engineers. Our team is based in both Milan and Barcelona, but the position is for the Milan office; he/she should be ready to travel periodically to Barcelona.
Job responsibilities as a data scientist
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions
- Mine and analyze data from databases to drive optimization and improvement of product development, marketing techniques and business strategies
- Assess the effectiveness and accuracy of new data sources and data gathering techniques
- Develop custom data models and algorithms to apply to data sets
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes
- Coordinate with different functional teams to implement models and monitor outcomes
- Develop processes and tools to monitor and analyze model performance and data accuracy
Job responsibilities supervising data engineers involve planning, steering, and monitoring:
- Full life cycle analysis of data sets, including implementing data acquisition, cleansing, transformation and upload activities
- Creation and maintenance of optimal data pipeline architecture
- Assembly of large, complex data sets that meet functional / non-functional business requirements
- Identification, design, and implementation of internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Building of the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using ‘big data’ technologies.
Qualifications and requirements
- A drive to learn and master new technologies and techniques.
- A degree in Computer Science, Physics, Engineering, Mathematics or another quantitative field, with familiarity with the software/tools listed below.
- Knowledge and experience in statistical, data mining and advanced machine learning techniques, using statistical programming languages and libraries: R, Python, etc. (sklearn, NLTK, pandas, etc.).
- Knowledge and experience in SQL and NoSQL (e.g. MongoDB, Elasticsearch) databases.
- Knowledge and experience with ETL tools (including Pentaho Data Integration).
- Knowledge and experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, etc.
- Knowledge and experience visualizing/presenting data for stakeholders using: Periscope, Business Objects, D3, ggplot, etc.
- Knowledge and experience with Linux at the user level.
- Availability to travel.
- EU work permit.
- Client-oriented, with the capacity to mentor and manage junior professionals, initiative, analytical thinking and problem-solving skills.
What we offer
- Position based in Milan.
- Indefinite contract.
- Full-time job with time flexibility.
- Competitive salary package + bonus + career path.
Interested candidates, please send your CV to the following email address: firstname.lastname@example.org