
Senior Platform Engineer, Data Services

Job requirement

Get to know our Team: 
GrabPay (Grab's digital payment wallet for Southeast Asia) and GFSA (Grab Financial Services Asia) are recent additions to Grab's array of product and service offerings, focused on extending microcredit to drivers, agents, and merchants in Grab's ecosystem.

The GFSA team combines a strong talent pool with deep local-market operators across its focus markets. We are incredibly excited about the opportunity ahead of us.
We are looking to put together the best possible combination of business-building drive, industry expertise, and local-market depth as part of our team.

The GFSA team is responsible for the end-to-end conceptualization, design, development, execution, and ongoing management of all lending activities in its focus markets and segments.

Get to know the Role: 
As the Platform Engineer, Data Services, you will work on all aspects of data, from platform and infrastructure build-out to pipeline engineering and writing tooling and services that augment and front the core platform. You will be responsible for building and maintaining a state-of-the-art data lifecycle management platform, covering acquisition, storage, processing, and consumption channels. The team works closely with data scientists, product managers, legal, compliance, and business stakeholders across Southeast Asia to understand their needs and tailor offerings to them. As a member of Data Services, GrabPay, you will be an early adopter of and contributor to various open-source big data technologies, and you are encouraged to think outside the box and have fun exploring the latest patterns and designs in software and data engineering.

The day-to-day activities:
- Build and manage Grab's largest data asset using some of the most scalable and resilient open-source big data technologies, such as Airflow, Spark, Apache Atlas, Kafka, YARN, HDFS, Elasticsearch, Presto/Dremio, HDP, a visualization layer, and more.
- Design and deliver the next-gen data lifecycle management suite of tools and frameworks, including ingestion and consumption on top of the data lake, to support real-time, API-based, and serverless use cases, along with batch (mini/micro-batch) as relevant.
- Build and expose a metadata catalog for the data lake to support easy exploration, profiling, and lineage requirements.
- Enable data science teams to test and productionize various ML models, including propensity, risk, and fraud models, to better understand, serve, and protect our customers.

- Lead technical discussions across the organization through collaboration, including running RFC and architecture review sessions, tech talks on new technologies, and retrospectives.
- Apply core software engineering and design concepts to create operational and strategic technical roadmaps for business problems that are vague or not fully understood.
- Obsess over security by ensuring all components, from the platform and frameworks to the applications, are fully secure and compliant with the group's infosec policies.

The must haves: 
- 5+ years of relevant experience developing scalable, secure, fault-tolerant, resilient, mission-critical big data platforms.
- Able to maintain and monitor the ecosystem with 99.9999% availability.
Candidates will be aligned appropriately within the organization depending on experience and depth of knowledge.
- Must have a sound understanding of all big data components and administration fundamentals, with hands-on experience building a complete data platform using various open-source technologies.
- Must have good fundamental hands-on knowledge of Linux and of building a big data stack on top of AWS/Azure using Kubernetes.
- Strong understanding of big data and related technologies like HDFS, Spark, Presto, Airflow, Apache Atlas, etc.
- Good knowledge of complex event processing (CEP) and stream processing systems like Spark Streaming, Kafka, Apache Flink, Beam, etc.
- Experience with NoSQL databases (key-value, document, graph, and similar).
- Proven ability to contribute to the open-source community and stay up to date with the latest trends in the big data space.
- Able to drive DevOps best practices such as CI/CD, containerization, blue-green deployments, 12-factor apps, and secrets management in the data ecosystem.
- Able to develop an agile platform that can auto-scale up and down, both vertically and horizontally.
- Must be able to create a monitoring ecosystem for all components in use in the data ecosystem.
- Proficiency in at least one of the following programming languages: Java, Scala, Python, or Go, along with a fair understanding of runtime complexity.
- Must have the knowledge to build data metadata, lineage, and discoverability capabilities from scratch.
- A good understanding of machine learning models, and the ability to support them efficiently, is a plus.