Our client created a cloud-based application that works across web and mobile devices.
We are seeking an experienced software engineer for our Data Platform team who has a solid foundation in data engineering, experience building distributed systems, and a hands-on approach to data governance.
This person should be comfortable working with a range of technologies and programming languages.
We are looking for someone who enjoys working in a team and is ready to help wherever necessary – no task is too small or insignificant.
An Agile mindset and the ability to thrive in a fast-paced, ever-changing environment are some of the qualities you bring. You should have experience working with large-scale data and distributed systems, as well as a solid understanding of storage, replication, and indexing. You should be fluent in English, as there will be daily communication with other teams and our primary language is English.
The responsibilities are:
We take full ownership of the solutions we create, so you will be involved in a variety of software engineering roles and activities.
Writing efficient distributed code and algorithms
Working with multiple database technologies and grasping their internals
Designing and building data warehouses for efficient data storage and querying
Automating all infrastructure components using infrastructure-as-code principles (Terraform, Ansible)
Troubleshooting and performance tuning
Supporting and coaching other developers to utilize the full power that the platform provides
The top candidate will have the following skills:
We are looking for a candidate with 3+ years of experience in a software engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software and tools:
Experience with object-oriented or functional programming languages: Java, .NET, Node.js, Scala, etc.
Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
A successful history of manipulating, processing and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Experience supporting and working with cross-functional teams in a dynamic environment.
Nice to have:
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Experience with AWS cloud services: EC2, EMR, RDS, Redshift, etc.
Experience with stream-processing systems: Storm, Spark Streaming, etc.
What We Offer
In addition to being part of our quest to help people empower their imagination, we offer:
Competitive salary and benefits
Flexible working hours
Ability to work remotely
Flexible time off
Daily lunch at the office
Health insurance (OSDE 310)
A phenomenal learning environment for you to develop