
Data Engineer - Search Platform

Job description

Tiki is intensely focused on growing its product selection for customers. This aligns with the goal behind everything we do: bringing more happiness and convenience to our customers. But for search, we have to be honest that this is our biggest challenge - the wider the selection, the harder the ranking problem.

Fortunately, our team is constantly iterating and standing together to solve problems. We have found many solutions to these challenges. But as the solutions arrive, the complexity of our data systems grows as well.

We are looking for a Data Engineer to stand with us and take responsibility for building a platform with a strong architecture. Since we are just at the beginning of the road, you can let your imagination run free. We encourage everyone to dare to try new things and even make some mistakes; after all, it is all part of life and learning.

Responsibilities for Data Engineer:

  • Create and maintain optimal data pipeline architecture.
  • Automate manual processes, optimize data delivery, and re-design infrastructure for greater scalability.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Google Cloud ‘big data’ technologies.
  • Create data tools for the analytics and data science team members that help them build and optimize our product into an innovative industry leader.
  • Work closely with product owners, data analysts, and data scientists to strive for greater functionality in our data systems.
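The extract-transform-load pattern behind the bullets above can be sketched minimally. This is an illustrative example only, using Python's built-in sqlite3 in place of a real warehouse; the table and column names (`raw_orders`, `product_revenue`) are hypothetical:

```python
import sqlite3

def run_etl(conn):
    """Minimal ETL sketch: extract raw rows, aggregate, load a report table."""
    # Extract: pull raw order rows from the source table (hypothetical schema).
    rows = conn.execute("SELECT product_id, price, qty FROM raw_orders").fetchall()
    # Transform: aggregate revenue per product.
    revenue = {}
    for product_id, price, qty in rows:
        revenue[product_id] = revenue.get(product_id, 0.0) + price * qty
    # Load: write the aggregate into a reporting table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS product_revenue "
        "(product_id TEXT PRIMARY KEY, revenue REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO product_revenue VALUES (?, ?)",
        revenue.items(),
    )
    conn.commit()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (product_id TEXT, price REAL, qty INTEGER)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("p1", 10.0, 2), ("p2", 5.0, 1), ("p1", 10.0, 1)],
)
run_etl(conn)
result = dict(conn.execute("SELECT product_id, revenue FROM product_revenue"))
```

In production this same shape is typically split into tasks in a workflow manager such as Airflow or Luigi, with each stage scheduled and retried independently.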

Why you will want to work here:

  • We are constantly iterating! There is no single best proposal for anything, no fastest API, no best machine learning model. We design, build, test, ship, optimize, and test again - a continuous stream of improvements and experiments.
  • We have a data-driven mindset: every change must be tested to gain insight into its impact on key metrics. It's a long process, but over time we gradually learn and become confident in our approach.
  • We love "best practices". Serving important features at high throughput constantly pushes us to research and apply best practices. Any experiment or optimization is always welcome.
  • We are both independent and open. We own our products. Technical problems are discussed internally, but for difficult ones we can ask others for help.

Job requirement

  • A minimum of 2 years' experience with Python (or Java) is required.
  • Working knowledge of message queues, stream processing, and scalable data stores.
  • Experience with data pipeline and workflow management as well as big data tools: Azkaban, Luigi, Airflow, Hadoop, Spark, Kafka.
  • Experience with cloud-based systems like Google Cloud Platform / Amazon Web Services: BigQuery, Dataflow, Amazon EKS, EMR, Redshift.
  • Being good at math and SQL is a big plus.


Insurance, Travel opportunities, Incentive bonus, Health checkup, Training & Development, Salary review, Laptop, Allowances, Uniform, Business Expense, Annual Leave, Sport Club