Work Schedule
Standard (Mon-Fri)

Environmental Conditions
Office

Job Description
When you’re part of Thermo Fisher Scientific, you’ll do challenging work and join a team that values performance, quality, and innovation. As part of a successful, growing global organization, you will be encouraged to perform at your best. With revenues of more than $40 billion and the largest investment in R&D in the industry, we give our people the resources and opportunities to make significant contributions to the world.
Our group
In the Physical Failure Analysis (PFA) group, we develop and maintain software that automates the operation of Scanning Electron Microscopes. Our solution is crucial for the semiconductor industry: it helps identify errors in the production of semiconductor components and advances research and production, making components more robust, reliable, and energy efficient.
As a member of an international team of scientists, software developers, and engineers, you'll support the development of application software that automates the instrument. The insights and recommendations you provide will improve solution robustness, speed up research into new technologies and algorithms, and help the team make better research decisions.
Daily Challenges
We are looking for a skilled engineer who can help us build a data-driven environment to improve our product. We also want to provide our customers with relevant data and help them understand process results.
Typical tasks will be:
- Designing and building infrastructure for internal data-driven decisions
- Building tools for data collection from various sources in the microscopy ecosystem
- Developing customer-facing on-premise solutions for maintaining the data platform
- Close collaboration with both data analysts and process developers

Knowledge, Skills, Abilities
- University degree in Computer Science or a related field
- Broad professional software engineering experience
- Profound, demonstrable knowledge of the Python ecosystem
- Knowledge of application interfaces (REST, SOAP, GraphQL, etc.)
- Unix/shell scripting
- Database fundamentals (normal forms, ER model)
- Proficiency in English
- Passion for bringing new insights and technologies
- Ability to work in a multi-functional team and willingness to understand the wider context of data

The following skills are nice to have:
- Experience with cloud-based services (AWS)
- Knowledge of IaC frameworks (Terraform, CDK)
- Basic principles of distributed "large-scale" systems (Apache Hadoop, Spark, Flink, etc.)
- Knowledge of automation/configuration frameworks (Ansible, Chef, Puppet)

Technology we use
- Databases: PostgreSQL, Elasticsearch, MongoDB, CockroachDB
- Observability: OpenTelemetry, ELK Stack (Elasticsearch + Logstash + Kibana), EFK Stack (Elasticsearch + Fluentd + Kibana)
- Visualization: Kibana, Superset, Python (Plotly)
- Apache NiFi
- Kafka, RabbitMQ