Data Engineer (Data Pipeline Architecture, Optimisation)
On behalf of our Client in the Pharmaceutical industry, we are sourcing for a Data Engineer to join their growing data analytics team, expanding and optimizing their data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up. He/She may even re-design the company’s data architecture to support its next generation of products and data initiatives.
Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Apply architecture and system design theories and principles; perform complex research, design, and development work on new or existing products, tools, and processes required for the operation, maintenance, and testing of products.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Embed DevOps processes in delivery, ensuring usability is front of mind in Tech Product development.
- Keep our data separated and secure across national boundaries through multiple data centers and Azure platforms.
Requirements:
- Degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- More than 5 years of experience in a Data Engineering role.
- Advanced working knowledge of SQL, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- MUST HAVE: Experience building and optimizing ‘big data’ data pipelines, data architectures and data sets.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Experience with big data tools: Hadoop, Hive, Kafka, ELK (ECE).
- Experience with relational SQL and NoSQL databases.
- Experience with mission-critical production development in the following languages and frameworks is a plus:
- Languages: Java, Python, Web (HTML5, CSS, JS, XML, TS), Golang, VBA
- Frameworks: Angular, .NET, scikit-learn, SciPy, NumPy.
- Experience with data pipeline and workflow management tools: Airflow, Apache NiFi, Redis, Microsoft Office, SharePoint.
- Experience with Azure cloud services.
- Experience with stream-processing systems: Storm, Spark.
- Experience in Microsoft Power BI implementation (custom components, DAX, M).
NOTE: Due to the current COVID-19 situation in Singapore, we regret that we WILL NOT be able to consider job applications from overseas.
Please send your updated CV in MS Word format to Christopher Wong at [Click Here to Email Your Resume].
We regret that only shortlisted candidates will be notified.
GMP Technologies (S) Pte Ltd | EA Licence: 11C3793 | EA Personnel: Christopher Wong | Registration No: R1104673