Posted: 1 week ago
Platform: Remote
Contractual
• Programming: Python, SQL, Spark
• Cloud Platforms: Azure, Snowflake
• Data Tools: DBT, Erwin Data Modeler, Apache Airflow, API Integrations, ADF
• Governance: Data masking, metadata management, SOX compliance
• Soft Skills: Communication, problem-solving, stakeholder engagement
• A degree is not required, provided the candidate has the right skill set and can commit to the assigned work/projects.
• There will likely be 2 rounds of interviews for this position.
As an IRIS Data Engineer, you will work with Data Scientists and Data Architects to translate prototypes into scalable solutions.
Data Engineers are responsible for designing and building robust, scalable, and high-quality data pipelines that support analytics and reporting needs. This includes:
• Integrate structured and unstructured data from various sources into data lakes and warehouses.
• Build and maintain scalable ETL/ELT pipelines for batch and streaming data using Azure Data Factory, Databricks, Snowflake, Azure SQL Server, and Control-M.
• Collaborate with data scientists, analysts, and platform engineers to enable analytics and ML use cases.
• Design, develop, and optimise DBT models to support scalable data transformations.
Data Engineers also operationalize data solutions on cloud platforms, integrating services such as Azure, Snowflake, and third-party technologies.
• Manage environments, performance tuning, and configuration for cloud-native data solutions.
• Apply dimensional modeling, star schemas, and data warehousing techniques to support business intelligence and machine learning workflows.
• Collaborate with solution architects and analysts to ensure models meet business needs.
• Ensure data integrity, privacy, and compliance through governance practices and secure schema design.
• Implement data masking, access controls, and metadata management for sensitive datasets.
• Work closely with cross-functional teams including product owners, architects, and business stakeholders to translate requirements into technical solutions.
• Participate in Agile ceremonies, sprint planning, and DevOps practices for continuous integration and deployment.
• 7+ years of data engineering or design experience: designing, developing, and deploying scalable enterprise data analytics solutions, from source systems through ingestion and reporting.
• 5+ years of experience in the ML lifecycle using Azure Kubernetes Service, Azure Container Instances, Azure Data Factory, Azure Monitor, and Azure Databricks: building datasets, ML pipelines, experiments, logging, and monitoring (including drift detection, model adaptation, and data collection).
• 5+ years of experience in data engineering using Snowflake.
• Experience designing, developing, and scaling complex data and feature pipelines that feed ML models, and evaluating their performance.
• Experience in building and managing streaming and batch inferencing.
• Proficiency in SQL and at least one other programming language (e.g., R, Python, C++, Minitab, SAS, MATLAB, VBA); knowledge of optimization engines such as CPLEX or Gurobi is a plus.
• Strong experience with cloud platforms (AWS, Azure, etc.) and containerization technologies (Docker, Kubernetes).
• Experience with CI/CD tools such as GitHub Actions, GitLab, Jenkins, or similar tools.
• Familiarity with security best practices in DevOps and ML Ops.
• Experience developing and maintaining APIs (e.g., REST).
• Experience working in Agile/Scrum using Azure DevOps.
• Experience with the Microsoft cloud and ML stack: Azure Databricks, Data Factory, Synapse, among others.
Radiant Systems Inc
Salary: Not disclosed