You are someone who wants to shape your own development and are looking for a company where you can pursue your interests and grow professionally.
You bring the following competencies:
• Bachelor’s degree or higher in computer science, computer engineering, or related field.
• 5+ years of overall professional experience, with 3+ years in data science: exploratory data analysis, hypothesis testing, and building prescriptive and predictive ML models.
• Experience with source control systems such as Git and Bitbucket.
• Experience with data visualization and business intelligence tools such as Power BI.
• General understanding of classic ML algorithms and model evaluation.
• Proficiency in programming languages such as Python, including scikit-learn.
• Experience with containerization and image optimization (Docker).
• Experience with cloud platforms such as AWS, particularly Lambda and ECS.
• Understanding of data pipelines, ETL processes, and data storage technologies.
• Proficiency in setting up CI/CD pipelines.
• Experience handling terabyte-scale datasets, diving into data to discover hidden patterns, using data visualization tools, and writing SQL.
• Understanding of provisioning a Jupyter Notebook server for ML experimentation.
• Strong communication skills to collaborate effectively with cross-functional teams, including data scientists, engineers, and stakeholders.
• Ability to troubleshoot and resolve complex issues related to model deployment and production systems.
• Ability to use statistical, algorithmic, data mining, and visualization techniques to model complex problems, find opportunities, discover solutions, and deliver actionable business insights.
• Advanced English proficiency is required, as you will work directly with US-based clients.
You will be accountable for the following responsibilities:
• Partner closely with the Product, Engineering, Marketing, Sales, Finance, and Data Science teams to shape product strategy using rigorous scientific methods
• Apply statistical, machine learning, and (ideally) econometric models to large datasets to: (i) measure the results and outcomes of our current models and product strategies, and (ii) optimize user experience while minimizing fraud and risk
• Design, conduct, and analyze experiments to quantify the impact of product and operation changes
• Develop metrics to guide product development, create dashboards for key performance indicators, and deep-dive to understand their drivers
• Drive the collection of new data and the refinement of existing data sources