We are looking for a Big Data Engineer to work with large volumes of data drawn from disparate sources across an organization’s technology infrastructure.
You bring to Applaudo the following competencies:
- 2+ years of experience with Scala and Spark.
- 3+ years of experience in data delivery, ETL (extract, transform, load), and data warehouse design, analysis, and programming.
- Experience with Apache Hive and Apache Hudi.
- Experience with Google Cloud Platform (BigQuery).
- An excellent grasp of relational and dimensional data modeling.
- Strong mathematical, statistical, and analytical skills.
- 1+ year of Agile experience.
- English is required, as you will work directly with US-based clients.
You will be accountable for the following responsibilities:
- Extracting data from different data sources and transferring it into a data warehouse environment.
- Designing, maintaining, and implementing transactional and analytical data storage structures.
- Designing, building, and maintaining data pipelines that consume from multiple sources and serve multiple tenants.
- Producing informative, expressive, and meaningful reports that support business decision-making.
- Translating reporting results into sound, consistent technical data designs.
- Work schedule: 9:30 a.m. to 7:30 p.m. India Standard Time.