2.0 - 4.0 years
7 - 11 Lacs
Mumbai
Remote
We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.
Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams.
Required Qualifications: 2+ years in data engineering using Python. Familiar with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).
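For context on the stack this posting names (Python, Pandas, SQL), a minimal ETL sketch follows. The file name, table name, and local SQLite target are illustrative assumptions standing in for a real source and cloud warehouse.

```python
# Minimal ETL sketch, assuming a hypothetical orders.csv and a local SQLite
# target standing in for a cloud warehouse; names are illustrative only.
import sqlite3
import pandas as pd

def run_etl(source_csv: str = "orders.csv", target_db: str = "warehouse.db") -> int:
    # Extract: read raw records from the source file
    df = pd.read_csv(source_csv, parse_dates=["order_date"])

    # Transform: basic cleansing and a derived column
    df = df.dropna(subset=["order_id", "amount"])
    df["amount"] = df["amount"].astype(float)
    df["order_month"] = df["order_date"].dt.to_period("M").astype(str)

    # Load: append into a warehouse table
    with sqlite3.connect(target_db) as conn:
        df.to_sql("fact_orders", conn, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    print(f"Loaded {run_etl()} rows")
```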
Posted 2 weeks ago
2.0 - 4.0 years
7 - 11 Lacs
Kolkata
Remote
We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.
Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams.
Required Qualifications: 2+ years in data engineering using Python. Familiar with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).
Posted 2 weeks ago
2.0 - 4.0 years
7 - 11 Lacs
Bengaluru
Remote
We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.
Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams.
Required Qualifications: 2+ years in data engineering using Python. Familiar with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).
Posted 2 weeks ago
10.0 - 15.0 years
0 Lacs
chennai, tamil nadu
On-site
Are you a skilled Data Architect with a passion for tackling intricate data challenges from various structured and unstructured sources? Do you excel in crafting micro data lakes and spearheading data strategies at an enterprise level? If this sounds like you, we are eager to learn more about your expertise.
In this role, you will be responsible for designing and constructing tailored micro data lakes specifically catered to the lending domain. Your tasks will include defining and executing enterprise data strategies encompassing modeling, lineage, and governance. You will play a crucial role in architecting robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from diverse sources like APIs, PDFs, logs, and databases. Furthermore, you will be instrumental in establishing best practices related to data quality, metadata management, and data lifecycle control. Your hands-on involvement in implementing processes, strategies, and tools will be pivotal in creating innovative products. Collaboration with engineering and product teams to align data architecture with overarching business objectives will be a key aspect of your role.
To excel in this position, you should bring over 10 years of experience in data architecture and engineering. A deep understanding of both structured and unstructured data ecosystems is essential, along with practical experience in ETL, ELT, stream processing, querying, and data modeling. Proficiency in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python is a must. Additionally, expertise in cloud-native data platforms like AWS, Azure, or GCP is highly desirable, along with a solid foundation in data governance, privacy, and compliance standards. Exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, and a background in fintech, lending, or regulatory data environments is also beneficial.
This role offers you the chance to lead data-first transformation, develop products that drive AI adoption, and the autonomy to design, build, and scale modern data architecture. You will be part of a forward-thinking, collaborative, and tech-driven culture with access to cutting-edge tools and technologies in the data ecosystem. If you are ready to shape the future of data with us, we encourage you to apply for this exciting opportunity based in Chennai. Join us in redefining data architecture and driving innovation in the realm of structured and unstructured data sources.
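The architecture described above includes ingesting data from APIs into a micro data lake. A hedged sketch of one such batch landing step is below; the endpoint and local lake path are hypothetical, with a local directory standing in for object storage.

```python
# Illustrative sketch of batch ingestion from an API into a date-partitioned
# "raw" zone of a data lake. The endpoint and local path are hypothetical;
# in practice the target would be object storage (S3/GCS/ADLS).
import json
import pathlib
import datetime
import requests

def ingest(endpoint: str, lake_root: str = "datalake/raw/loans") -> pathlib.Path:
    response = requests.get(endpoint, timeout=30)
    response.raise_for_status()
    records = response.json()

    # Partition the landing path by ingestion date for lifecycle management
    today = datetime.date.today().isoformat()
    out_dir = pathlib.Path(lake_root) / f"ingest_date={today}"
    out_dir.mkdir(parents=True, exist_ok=True)

    out_file = out_dir / "batch.json"
    out_file.write_text(json.dumps(records))
    return out_file

if __name__ == "__main__":
    print(ingest("https://api.example.com/loans"))  # hypothetical endpoint
```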
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
As an organization with over 26 years of experience in delivering Software Product Development, Quality Engineering, and Digital Transformation Consulting Services to Global SMEs & Large Enterprises, CES has established long-term relationships with leading Fortune 500 Companies across various industries such as Automotive, AgTech, Bio Science, EdTech, FinTech, Manufacturing, Online Retailers, and Investment Banks. These relationships, spanning over a decade, are built on our commitment to timely delivery of quality services, investments in technology innovations, and fostering a true partnership mindset with our customers. In our current phase of exponential growth, we maintain a consistent focus on continuous improvement and a process-oriented culture. To further support our accelerated growth, we are seeking qualified and committed individuals to join us and play an exceptional role. You can learn more about us at: http://www.cesltd.com/
Experience with Azure Synapse Analytics is a key requirement for this role. The ideal candidate should have hands-on experience in designing, developing, and deploying solutions using Azure Synapse Analytics, including a good understanding of its various components such as SQL pools, Spark pools, and Integration Runtimes. Proficiency in Azure Data Lake Storage is also essential, with a deep understanding of its architecture, features, and best practices for managing a large-scale Data Lake or Lakehouse in an Azure environment. Moreover, the candidate should have experience with AI tools and LLMs (e.g., GitHub Copilot, Copilot, ChatGPT) for automating responsibilities related to the role. Knowledge of Avro and Parquet file formats is required, including experience in data serialization, compression techniques, and schema evolution in a big data environment. Prior experience working with data in a healthcare or clinical laboratory setting is highly desirable, along with a strong understanding of PHI, GDPR, HIPAA, and HITRUST regulations. Relevant certifications such as Azure Data Engineer Associate or Azure Synapse Analytics Developer Associate are highly desirable for this position.
The essential functions of the role include designing, developing, and maintaining data pipelines for ingestion, transformation, and loading of data into Azure Synapse Analytics, as well as working on data models, SQL queries, stored procedures, and other artifacts necessary for data processing and analysis. Successful candidates should possess proficiency in relational databases such as Oracle, Microsoft SQL Server, PostgreSQL, and MySQL/MariaDB, strong SQL skills, experience in building ELT pipelines and data integration solutions, familiarity with data modeling and warehousing concepts, and excellent analytical and problem-solving abilities. Effective communication and collaboration skills are also crucial for working with cross-functional teams.
If you are a dedicated professional with the required expertise and skills, we invite you to join our team and contribute to our continued success in delivering exceptional services to our clients.
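Since the posting calls out Avro and Parquet handling, here is a small illustrative sketch of Parquet serialization with pyarrow; the schema, values, and file name are made up for demonstration.

```python
# Small sketch of columnar serialization with Parquet via pyarrow, the kind
# of file-format handling the posting references; schema and path are made up.
import pyarrow as pa
import pyarrow.parquet as pq

# Build a tiny table with an explicit schema
table = pa.table(
    {
        "sample_id": pa.array([101, 102, 103], type=pa.int64()),
        "result": pa.array(["negative", "positive", "negative"], type=pa.string()),
    }
)

# Write with snappy compression, then read it back and inspect the schema
pq.write_table(table, "lab_results.parquet", compression="snappy")
round_trip = pq.read_table("lab_results.parquet")
print(round_trip.schema)
print(round_trip.to_pydict())
```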
Posted 2 weeks ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a proactive and detail-oriented L1 DataOps Monitoring Engineer to support our data pipeline operations. This role involves monitoring, identifying issues, raising alerts, and ensuring timely communication and escalation to minimize data downtime and improve reliability.
Key Responsibilities:
- Monitor data pipelines, jobs, and workflows using tools like Airflow, Control-M, or custom monitoring dashboards.
- Acknowledge and investigate alerts from monitoring tools (Datadog, Prometheus, Grafana, etc.).
- Perform first-level triage for job failures, delays, and anomalies.
- Log incidents and escalate to L2/L3 teams as per SOP.
- Maintain shift handover logs and daily operational reports.
- Perform routine system checks and health monitoring of data environments.
- Follow predefined runbooks to troubleshoot known issues.
- Coordinate with application, infrastructure, and support teams for timely resolution.
- Participate in shift rotations including nights/weekends/public holidays.
Skills and Qualifications:
- Bachelor's degree in Computer Science, IT, or related field (or equivalent experience).
- 0–2 years of experience in IT support, monitoring, or NOC environments.
- Basic understanding of data pipelines and ETL/ELT processes.
- Familiarity with monitoring tools (Datadog, Grafana, CloudWatch, etc.).
- Exposure to job schedulers (Airflow, Control-M, Autosys) is a plus.
- Good verbal and written communication skills.
- Ability to remain calm and effective under pressure.
- Willingness to work in a 24x7 rotational shift model.
Good to Have (Optional):
- Knowledge of cloud platforms (AWS/GCP/Azure)
- Basic SQL or scripting knowledge (Shell/Python)
- ITIL awareness or ticketing systems experience (e.g., ServiceNow, JIRA)
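As a concrete illustration of first-level triage on Airflow, the sketch below lists failed DAG runs via the Airflow 2 stable REST API. It assumes that API and basic auth are enabled; the base URL, credentials, and DAG id are placeholders.

```python
# First-level triage sketch: list failed runs for a DAG via the Airflow 2
# stable REST API. Assumes the API and basic auth are enabled; the base URL,
# credentials, and DAG id are placeholders.
import requests

AIRFLOW_BASE = "http://localhost:8080/api/v1"
AUTH = ("monitor_user", "monitor_password")  # placeholder credentials

def failed_runs(dag_id: str, limit: int = 10) -> list[dict]:
    resp = requests.get(
        f"{AIRFLOW_BASE}/dags/{dag_id}/dagRuns",
        params={"state": "failed", "limit": limit},
        auth=AUTH,
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("dag_runs", [])

if __name__ == "__main__":
    for run in failed_runs("daily_sales_pipeline"):  # placeholder DAG id
        print(run["dag_run_id"], run["state"], run["execution_date"])
```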
Posted 2 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad
Hybrid
Required Skills:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Minimum 3 years of hands-on experience with SnapLogic or similar iPaaS tools (e.g., MuleSoft, Dell Boomi, Informatica).
- Strong integration skills with various databases (SQL, NoSQL) and enterprise systems.
- Proficiency in working with REST/SOAP APIs, JSON, XML, and data transformation techniques.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Solid understanding of data flow, ETL/ELT processes, and integration patterns.
- Excellent analytical, problem-solving, and communication skills.
- Exposure to DevOps tools and CI/CD pipelines.
- Experience integrating with enterprise platforms (e.g., Salesforce).
Key Responsibilities:
- Design, develop, and maintain scalable integration pipelines using SnapLogic.
- Integrate diverse systems including relational databases, cloud platforms, SaaS applications, and on-premise systems.
- Collaborate with cross-functional teams to gather requirements and deliver robust integration solutions.
- Monitor, troubleshoot, and optimize SnapLogic pipelines for performance and reliability.
- Ensure data consistency, quality, and security across integrated systems.
- Maintain technical documentation and follow best practices in integration development.
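To illustrate the REST/JSON integration skills listed above in plain Python (as a stand-in for a SnapLogic pipeline step), here is a hedged sketch; the endpoint and field mapping are hypothetical.

```python
# Plain-Python stand-in for one integration step the posting describes:
# call a REST API, map the JSON payload to flat records, and hand them to a
# downstream target. Endpoint and field names are hypothetical.
import requests

def fetch_and_map(endpoint: str) -> list[dict]:
    resp = requests.get(endpoint, headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()

    # Transformation: flatten nested source fields into the target schema
    return [
        {
            "account_id": item.get("id"),
            "account_name": item.get("attributes", {}).get("name"),
            "country": item.get("attributes", {}).get("address", {}).get("country"),
        }
        for item in payload.get("data", [])
    ]

if __name__ == "__main__":
    rows = fetch_and_map("https://api.example.com/v1/accounts")  # hypothetical
    print(f"Mapped {len(rows)} records")
```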
Posted 2 weeks ago
8.0 - 12.0 years
0 - 0 Lacs
Hyderabad
Hybrid
Job Title: Lead Data Engineer
Experience: 8+ years
Job Type: Hybrid (3 days)
Location: Hyderabad
Contract: 6+ months
Mandatory Skills: Python, SQL, Snowflake (3+ years on each skill)
Required Skills:
- Senior developer with approximately 8 years of experience in data engineering
- Background in medium to large-scale client environments, working on at least 3 or 4 projects
- Strong expertise in data engineering and ETL/ELT workflows
- Solid understanding of database concepts and data modeling
- Proficient in SQL, PL/SQL, and Python
- Snowflake experience (3+ years) with base or advanced certification
- Excellent communication skills (written and verbal)
- Ability to work independently and proactively
Posted 2 weeks ago
4.0 - 5.0 years
6 - 7 Lacs
Bengaluru
Work from Office
Data Engineer
Skills required: Big Data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP & BigQuery preferred), Airflow (good knowledge of Airflow features, operators, scheduling, etc.)
Skills that would add an advantage: DBT, Kafka
Experience level: 4-5 years
NOTE: The candidate will have a coding test (Python and SQL) during the interview process. It will be conducted through CoderPad, and the panel will set it at run time.
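A minimal Airflow DAG sketch covering the operator and scheduling concepts mentioned above; task logic is stubbed, and the DAG id and schedule are examples only.

```python
# Minimal Airflow 2.x DAG sketch showing operators and scheduling, the
# concepts the posting calls out. Task logic is stubbed; IDs are examples.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pulling source data")

def load(**context):
    print("loading into the warehouse (stubbed)")

with DAG(
    dag_id="example_elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # cron expressions also work here
    catchup=False,
    tags=["example"],
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```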
Posted 2 weeks ago
5.0 - 10.0 years
0 - 0 Lacs
chennai
Remote
Job Title: Data Engineer - PySpark & AWS
Location: Chennai
Employment Type: Full-Time with Artech
Experience Level: 4-10 years
About the Role: We are seeking a highly skilled Data Engineer with strong expertise in PySpark and AWS to join our growing data team. In this role, you will be responsible for building, optimizing, and maintaining data pipelines and ETL workflows on the cloud, enabling large-scale data processing and analytics. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, accurate, and reliable for advanced analytics and reporting.
Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using PySpark and Apache Spark.
- Develop and manage ETL/ELT workflows to ingest data from multiple structured and unstructured sources.
- Implement data transformation, cleansing, validation, and aggregation logic.
- Work with AWS cloud services such as S3, Glue, EMR, Lambda, Redshift, Athena, and CloudWatch.
- Monitor data pipelines for performance, reliability, and data quality.
- Collaborate with cross-functional teams to understand business data needs and translate them into technical solutions.
- Automate data engineering tasks and infrastructure using tools like Terraform or CloudFormation (optional).
- Maintain and document data architecture, job logic, and operational processes.
Required Skills:
- 4+ years of experience as a Data Engineer or in a similar role.
- Strong hands-on experience with PySpark and Apache Spark for distributed data processing.
- Proficiency in Python programming for data manipulation and automation.
- Solid understanding of AWS services for data engineering: S3, Glue, EMR, Redshift, Lambda, Athena, CloudWatch.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Knowledge of data modeling, warehousing, and partitioning strategies.
- Experience with version control (Git) and CI/CD practices.
Nice to Have:
- Experience with workflow orchestration tools (e.g., Airflow, Step Functions).
- Familiarity with Docker/Kubernetes for containerized deployments.
- Exposure to NoSQL databases (DynamoDB, MongoDB).
- Experience with Terraform or CloudFormation for infrastructure automation.
- Knowledge of Delta Lake and data lake architecture best practices.
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
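A hedged sketch of the kind of PySpark job this role describes: read raw JSON from S3, cleanse it, and write partitioned Parquet back. Bucket paths and column names are placeholders, and an EMR/Glue-style runtime is assumed.

```python
# Sketch of a PySpark batch job: read raw JSON from S3, apply cleansing and
# derived columns, and write partitioned Parquet. Buckets and columns are
# placeholders; a Spark runtime with S3 access (EMR/Glue) is assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")  # placeholder bucket

cleaned = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_time"))
       .withColumn("dt", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("dt")
        .parquet("s3://example-curated-bucket/events/"))  # placeholder bucket

spark.stop()
```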
Posted 2 weeks ago
7.0 - 12.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.
Required Candidate Profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 2 weeks ago
2.0 - 5.0 years
20 - 25 Lacs
Hyderabad
Work from Office
About the Role: We are looking for an Analytics Engineer with 2+ years of experience to help build and maintain our modern data platform. You'll work with dbt, Snowflake, and Airflow to develop clean, well-documented, and trusted datasets. This is a hands-on role, ideal for someone who wants to grow their technical skills while contributing to a high-impact analytics function.
Key Responsibilities:
- Build and maintain scalable data models using dbt and Snowflake
- Develop and orchestrate data pipelines with Airflow or similar tools
- Partner with teams across DAZN to translate business needs into robust datasets
- Ensure data quality through testing, validation, and monitoring practices
- Follow best practices in code versioning, CI/CD, and data documentation
- Contribute to the evolution of our data architecture and team standards
What We're Looking For:
- 2+ years of experience in analytics/data engineering or similar roles
- Strong skills in SQL and working knowledge of cloud data warehouses (Snowflake preferred)
- Experience with dbt for data modeling and transformation
- Familiarity with Airflow or other workflow orchestration tools
- Understanding of ELT processes, data modeling, and data governance principles
- Strong collaboration and communication skills
Nice to Have:
- Experience working in media, OTT, or sports technology domains
- Familiarity with BI tools like Looker, Tableau, or Power BI
- Exposure to testing frameworks like dbt tests or Great Expectations
Posted 2 weeks ago
8.0 - 12.0 years
30 - 35 Lacs
Hyderabad
Work from Office
Job Summary: We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.
Key Responsibilities:
- Data Architecture & Strategy: Design and implement scalable, high-performance cloud-based data architectures on AWS. Define data modeling standards for structured and semi-structured data in Snowflake. Establish data governance, security, and compliance best practices.
- Data Warehousing & ETL/ELT Pipelines: Develop, maintain, and optimize Snowflake-based data warehouses. Implement dbt (Data Build Tool) for data transformation and modeling. Design and schedule data pipelines using Apache Airflow for orchestration.
- Cloud & Infrastructure Management: Architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift. Ensure cost-effective, highly available, and scalable cloud data solutions.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals. Provide technical guidance and mentoring to the data engineering team.
- Performance Optimization & Monitoring: Optimize query performance and data processing within Snowflake. Implement logging, monitoring, and alerting for pipeline reliability.
Required Skills & Qualifications:
- 10+ years of experience in data architecture, engineering, or related roles.
- Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices.
- Hands-on experience with dbt for data transformations and modeling.
- Proficiency in Apache Airflow for workflow orchestration.
- Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.).
- Experience with SQL, Python, or Spark for data processing.
- Familiarity with CI/CD pipelines and Infrastructure-as-Code (Terraform/CloudFormation) is a plus.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.).
Preferred Qualifications:
- Certifications: AWS Certified Data Analytics Specialty, Snowflake SnowPro Certification, or dbt Certification.
- Experience with streaming technologies (Kafka, Kinesis) is a plus.
- Knowledge of modern data stack tools (Looker, Power BI, etc.).
- Experience in OTT streaming would be an added advantage.
Posted 2 weeks ago
6.0 - 11.0 years
25 - 27 Lacs
Hyderabad
Work from Office
Overview: We are seeking a highly skilled and experienced Azure Data Engineer to join our dynamic team. In this critical role, you will be responsible for designing, developing, and maintaining robust and scalable data solutions on the Microsoft Azure platform. You will work closely with data scientists, analysts, and business stakeholders to translate business requirements into effective data pipelines and data models.
Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Data Factory, Azure Databricks, and other relevant Azure services.
- Develop and maintain data lakes and data warehouses on Azure, including Azure Data Lake Storage Gen2 and Azure Synapse Analytics.
- Build and optimize data models for data warehousing, data marts, and data lakes.
- Develop and implement data quality checks and data governance processes.
- Troubleshoot and resolve data-related issues.
- Collaborate with data scientists and analysts to support data exploration and analysis.
- Stay current with the latest advancements in cloud computing and data engineering technologies.
- Participate in all phases of the software development lifecycle, from requirements gathering to deployment and maintenance.
Qualifications:
- 6+ years of experience in data engineering, with at least 3 years of experience working with Azure data services.
- Strong proficiency in SQL, Python, and other relevant programming languages.
- Experience with data warehousing and data lake architectures.
- Experience with ETL/ELT tools and technologies, such as Azure Data Factory, Azure Databricks, and Apache Spark.
- Experience with data modeling and data warehousing concepts.
- Experience with data quality and data governance best practices.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Experience with Agile development methodologies.
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred).
- Relevant Azure certifications (e.g., Azure Data Engineer Associate) are a plus.
Posted 2 weeks ago
3.0 - 5.0 years
2 - 3 Lacs
Kolkata
Work from Office
Qualification: BCA; MCA preferable.
Required Skill Set:
- 5+ years in data engineering, with at least 2 years on GCP/BigQuery
- Strong Python and SQL expertise (Airflow, dbt, or similar)
- Deep understanding of ETL patterns, change-data-capture, and data-quality frameworks
- Experience with IoT or time-series data pipelines is a plus
- Excellent communication skills and a track record of leading cross-functional teams
Job Description / Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines in Airflow and BigQuery
- Define and enforce data-modeling standards, naming conventions, and testing frameworks
- Develop and review core transformations: IoT enrichment (batch-ID assignment, stage tagging), transactional ETL (ERPNext/MariaDB to BigQuery), and finance automation pipelines (e.g., bank reconciliation)
- Create and manage schema definitions for staging, enriched_events, and erp_batch_overview tables
- Implement data-quality tests (using dbt or custom Airflow operators) and oversee QA handoff
- Collaborate closely with DevOps to ensure CI/CD, monitoring, and cost-efficient operations
- Drive documentation, runbooks, and knowledge transfer sessions
- Mentor and coordinate with freelance data engineers and analytics team members
Desired Profile of the Candidate:
- Proficiency in Python and SQL, including working with Airflow and dbt or similar tools
- Strong understanding of ETL/ELT design patterns, CDC (Change Data Capture), and data governance best practices
- Excellent communication skills and the ability to translate technical requirements into business outcomes
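As an illustration of a BigQuery transformation step in Python, the sketch below runs an aggregation query into a staging table; the project, dataset, and table names are placeholders, and google-cloud-bigquery with default credentials is assumed.

```python
# Illustrative BigQuery transformation step in Python: run an enrichment
# query and write results to a staging table. Project, dataset, and table
# names are placeholders; assumes google-cloud-bigquery and ADC credentials.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

sql = """
    SELECT
      device_id,
      DATE(event_ts) AS event_date,
      COUNT(*) AS readings
    FROM `example-project.iot.enriched_events`
    GROUP BY device_id, event_date
"""

job_config = bigquery.QueryJobConfig(
    destination="example-project.staging.device_daily_readings",
    write_disposition="WRITE_TRUNCATE",
)
result = client.query(sql, job_config=job_config).result()
print(f"Wrote {result.total_rows} rows")
```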
Posted 2 weeks ago
4.0 - 6.0 years
5 - 13 Lacs
Pune
Hybrid
Job Description: This position is for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.
Major Responsibilities:
- Adhere to standard coding principles and standards.
- Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
- Design, develop, and deploy Python scripts and ETL processes in an ADF environment to process and analyze varying volumes of data.
- Experience with DWH, data integration, cloud, design, and data modeling.
- Proficient in developing programs in Python and SQL.
- Experience with data warehouse dimensional data modeling.
- Work with event-based/streaming technologies to ingest and process data.
- Work with structured, semi-structured, and unstructured data.
- Optimize ETL jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Experience designing and developing Enterprise Data Warehouse solutions.
- Proficient in writing SQL queries and programming, including stored procedures and reverse engineering existing processes.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Check-in, check-out, peer review, and merging of PRs into the Git repo.
- Knowledge of deployment of packages and code migrations to stage and prod environments via CI/CD pipelines.
Skills:
- 3+ years of Python coding experience.
- 5+ years of SQL Server-based development of large datasets.
- 5+ years of experience developing and deploying ETL pipelines using Databricks PySpark.
- Experience in any cloud data warehouse like Synapse, ADF, Redshift, or Snowflake.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Previous experience leading an enterprise-wide Cloud Data Platform migration, with strong architectural and design skills.
- Experience with cloud-based data architectures, messaging, and analytics.
- Cloud certification(s).
Add-ons: Any experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.
Posted 2 weeks ago
6.0 - 11.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Minimum of 6+ years of experience in the IT industry.
- Creating data models, building data pipelines, and deploying fully operational data warehouses within Snowflake.
- Writing and optimizing SQL queries, tuning database performance, and identifying and resolving performance bottlenecks.
- Integrating Snowflake with other tools and platforms, including ETL/ELT processes and third-party applications.
- Implementing data governance policies, maintaining data integrity, and managing access controls.
- Creating and maintaining technical documentation for data solutions, including data models, architecture, and processes.
- Familiarity with cloud platforms and their integration with Snowflake.
- Basic coding skills in languages like Python or Java can be helpful for scripting and automation.
- Outstanding ability to communicate, both verbally and in writing.
- Strong analytical and problem-solving skills.
- Experience in the banking domain.
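A minimal sketch of querying Snowflake from Python with the official connector, the kind of integration and scripting work described above; the account, credentials, and table are placeholders and would normally come from a secrets store.

```python
# Minimal sketch of querying Snowflake from Python with the official
# connector; account and credential values are placeholders and would
# normally come from a secrets manager rather than literals.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",     # placeholder account locator
    user="ANALYTICS_SVC",
    password="***",                   # placeholder
    warehouse="ANALYTICS_WH",
    database="BANKING_DW",
    schema="CORE",
)

try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT account_type, COUNT(*) AS accounts
        FROM dim_account
        GROUP BY account_type
        ORDER BY accounts DESC
        """
    )
    for account_type, accounts in cur.fetchall():
        print(account_type, accounts)
finally:
    conn.close()
```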
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Warehouse Engineer at Myridius, you will need solid SQL skills and a basic knowledge of data modeling. Your role will involve working with Snowflake in Azure and CI/CD processes using any tooling. Familiarity with Azure ADF and ETL/ELT frameworks would also be beneficial for this position. Experience with ER/Studio and a good understanding of the healthcare/life sciences industry are advantageous, and knowledge of GxP processes will be a plus in this role.
For a Senior Data Warehouse Engineer position, you will oversee engineers while actively engaging in the same tasks. Your responsibilities will include conducting design reviews, code reviews, and deployment reviews with engineers. You should have solid data modeling expertise, preferably using ER/Studio or an equivalent tool. Optimizing Snowflake SQL queries to enhance performance and familiarity with medallion architecture will be key aspects of this role.
At Myridius, we are dedicated to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we drive a new vision to propel organizations through rapidly evolving technology and business landscapes. Our commitment to exceeding expectations ensures measurable impact and fosters sustainable innovation. Together with our clients, we co-create solutions that anticipate future trends and help businesses thrive in a world of continuous change. If you are passionate about driving significant growth and maintaining a competitive edge in the global market, join Myridius in crafting transformative outcomes and elevating businesses to new heights of innovation. Visit www.myridius.com to learn more about how we lead the change.
Posted 2 weeks ago
10.0 - 15.0 years
0 Lacs
chennai, tamil nadu
On-site
Are you a hands-on Data Architect who excels at tackling intricate data challenges within structured and unstructured sources? Are you passionate about crafting micro data lakes and spearheading enterprise-wide data strategies? If this resonates with you, we are eager to learn more about your expertise.
In this role, you will be responsible for designing and constructing tailored micro data lakes specific to the lending domain. Additionally, you will play a key role in defining and executing enterprise data strategies encompassing modeling, lineage, and governance. Your tasks will involve architecting and implementing robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from various sources such as APIs, PDFs, logs, and databases. Establishing best practices for data quality, metadata management, and data lifecycle control will also be part of your core responsibilities. Collaboration with engineering and product teams to align data architecture with business objectives will be crucial, as well as evaluating and integrating modern data platforms and tools like Databricks, Spark, Kafka, Snowflake, AWS, GCP, and Azure. Furthermore, you will mentor data engineers and promote engineering excellence in data practices.
The ideal candidate for this role should possess a minimum of 10 years of experience in data architecture and engineering, along with a profound understanding of structured and unstructured data ecosystems. Hands-on proficiency in ETL, ELT, stream processing, querying, and data modeling is essential, as well as expertise in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python. Familiarity with cloud-native data platforms like AWS, Azure, or GCP is required, alongside a solid foundation in data governance, privacy, and compliance standards. A strategic mindset coupled with the ability to execute hands-on tasks when necessary is highly valued. Exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, and a background in fintech, lending, or regulatory data environments is also beneficial.
As part of our team, you will have the opportunity to lead data-first transformation and develop products that drive AI adoption. You will enjoy the autonomy to design, build, and scale modern data architecture within a forward-thinking, collaborative, and tech-driven culture. Additionally, you will have access to the latest tools and technologies in the data ecosystem.
Location: Chennai | Experience: 10-15 Years | Full-Time | Work From Office
If you are ready to shape the future of data alongside us, we invite you to apply now and embark on this exciting journey!
Posted 2 weeks ago
3.0 - 6.0 years
14 - 18 Lacs
Mumbai
Work from Office
Overview: The Data Technology group in MSCI is responsible for building and maintaining a state-of-the-art data management platform that delivers Reference, Market, and other critical data points to various products of the firm. The platform, hosted in the firm's data centers and on the Azure & GCP public clouds, processes 100 TB+ of data and is expected to run 24x7. With an increased focus on automation around systems development and operations, Data Science-based quality control, and cloud migration, several tech stack modernization initiatives are currently in progress. To accomplish these initiatives, we are seeking a highly motivated and innovative individual to join the Data Engineering team to support our next generation of developer tools and infrastructure. The team is the hub around which the Engineering and Operations teams revolve for automation and is committed to providing self-serve tools to our internal customers. The position is based in the Mumbai, India office.
Responsibilities:
- Build and maintain ETL pipelines for Snowflake.
- Manage Snowflake objects and data models.
- Integrate data from various sources.
- Optimize performance and query efficiency.
- Automate and schedule data workflows.
- Ensure data quality and reliability.
- Collaborate with cross-functional teams.
- Document processes and data flows.
Qualifications:
- Self-motivated, collaborative individual with a passion for excellence.
- B.E. in Computer Science or equivalent with 5+ years of total experience and at least 2 years of experience working with databases.
- Good working knowledge of source control applications like Git, with prior experience building deployment workflows using this tool.
- Good working knowledge of Snowflake, YAML, and Python.
- Experience managing Snowflake databases, schemas, tables, and other objects.
- Proficient in Snowflake SQL, including CTEs, window functions, and stored procedures.
- Familiar with Snowflake performance tuning and cost optimization tools.
- Skilled in building ETL/ELT pipelines using dbt, Airflow, or Python.
- Able to work with various data sources including RDBMS, APIs, and cloud storage.
- Understanding of incremental loads, error handling, and scheduling best practices.
- Strong SQL skills and intermediate Python proficiency for data processing.
- Familiar with Git for version control and collaboration.
- Basic knowledge of Azure or GCP cloud platforms.
- Capable of integrating Snowflake with APIs and cloud-native services.
What we offer you:
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.
MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.
MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.
To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.
Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
Posted 2 weeks ago
5.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about Target in India: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
About the Role: The Senior RBX Data Specialist role at Target in India involves the end-to-end management of data, encompassing building and maintaining pipelines through ETL/ELT and data modeling, ensuring data accuracy and system performance, and resolving data flow issues. It also requires analyzing data to generate insights, creating visualizations for stakeholders, automating processes for efficiency, and collaborating effectively across both business and technical teams. You will also answer ad-hoc questions from your business users by conducting quick analyses on relevant data, identifying trends and correlations, and forming hypotheses to explain the observations. Some of this will lead to bigger projects of increased complexity, where you will have to work as part of a bigger team but also independently execute specific tasks. Finally, you are expected to always adhere to the project schedule and technical rigor, as well as requirements for documentation, code versioning, etc.
Key Responsibilities:
- Data Pipeline and Maintenance: Monitor data pipelines and warehousing systems to ensure optimal health and performance. Ensure data integrity and accuracy throughout the data lifecycle.
- Incident Management and Resolution: Drive the resolution of data incidents and document their causes and fixes, collaborating with teams to prevent recurrence.
- Automation and Process Improvement: Identify and implement automation opportunities and DataOps best practices to enhance the efficiency, reliability, and scalability of data processes.
- Collaboration and Communication: Work closely with data teams and stakeholders to understand data pipeline architecture and dependencies, ensuring timely and accurate data delivery while effectively communicating data issues and participating in relevant discussions.
- Data Quality and Governance: Implement and enforce data quality standards, monitor metrics for improvement, and support data governance by ensuring policy compliance.
- Documentation and Reporting: Create and maintain clear and concise documentation of data pipelines, processes, and troubleshooting steps. Develop and generate reports on data operations performance and key metrics.
Core responsibilities are described within this job description. Job duties may change at any time due to business needs.
About You:
- B.Tech / B.E. or equivalent (completed) degree
- 5+ years of relevant work experience
- Experience in Marketing/Customer/Loyalty/Retail analytics is preferable
- Exposure to A/B testing
- Familiarity with big data technologies, data languages, and visualization tools
- Exposure to languages such as Python and R for data analysis and modelling
- Proficiency in SQL for data extraction, manipulation, and analysis, with experience in big data query frameworks such as Hive, Presto, SQL, or BigQuery
- Solid foundational knowledge of mathematics, statistics, and predictive modelling techniques, including Linear Regression, Logistic Regression, time-series models, and classification techniques
- Ability to simplify complex technical and analytical methodologies for easier comprehension by broad audiences
- Ability to identify process and tool improvements and implement change
- Excellent written and verbal English communication skills for global working
- Motivation to initiate, build, and maintain global partnerships
- Ability to function in group and/or individual settings
- Willing and able to work from our office location (Bangalore HQ) as required by business needs and brand initiatives
Useful Links: Life at Target - https://india.target.com/ | Benefits - https://india.target.com/life-at-target/workplace/benefits | Culture - https://india.target.com/life-at-target/belonging
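The role above mentions exposure to A/B testing alongside statistics; a quick illustrative sketch of a two-proportion z-test follows, with invented counts and statsmodels assumed available.

```python
# Quick A/B-test sketch: compare conversion rates between control and test
# variants with a two-proportion z-test. Counts are made-up example numbers;
# assumes statsmodels is installed.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 465]    # converted users in control, test (example data)
exposures = [10000, 10000]  # users exposed in each variant (example data)

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {stat:.3f}, p-value = {p_value:.4f}")
# A small p-value (e.g., < 0.05) suggests the conversion rates differ.
```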
Posted 2 weeks ago
3.0 - 7.0 years
6 - 10 Lacs
Kolkata
Work from Office
In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. As a Quality Engineer/Tester, you will work with client and IBM stakeholders to identify various business controls for the client and work with the respective IBM and client teams to ensure successful implementation of the identified controls.
Your primary responsibilities include:
- Software Tester at AT&T for BGWIOT - Microservices and Databricks; production support.
- Analyze test specifications and convert them into manual/automated test cases.
- Identify the initial setup, input data, appropriate steps, and the expected response in the manual test cases.
- Conduct sanity testing of the application based on user requirements.
- Write test cases based on the user stories.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Hands-on experience in API testing using Postman/SOAP-UI and ELT testing with Databricks and MySQL.
- Expertise in test management, data management, and defect management, plus hands-on experience with Agile methodology, including tools like JIRA, ADO, and iTrack.
- Sound knowledge of the telecom domain and billing systems.
- Hands-on experience handling production support, especially the reporting system.
- Experience with Kubernetes for validation and analysis of logs/data.
Preferred technical and professional experience:
- Automation testing using TOSCA.
- Experience writing complex SQL queries.
- Support, coordination, and responsibility for the testing activities across shores.
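As a plain-Python counterpart to the Postman/SOAP-UI checks mentioned above, here is a hedged API-test sketch; the endpoint and response fields are hypothetical, so it would only pass against a real service.

```python
# Sketch of an automated API check comparable to a Postman/SOAP-UI test:
# hit an endpoint, assert the status code, and validate a couple of response
# fields. The URL and expected fields are hypothetical.
import requests

def test_get_subscriber_profile():
    url = "https://api.example.com/billing/v1/subscribers/12345"  # hypothetical
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=20)

    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    body = resp.json()
    assert body.get("subscriberId") == "12345"
    assert "planCode" in body

if __name__ == "__main__":
    test_get_subscriber_profile()
    print("API check passed")
```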
Posted 2 weeks ago
5.0 - 8.0 years
15 - 30 Lacs
Hyderabad
Work from Office
5+ yrs exp as a Data Engineer with a strong track record of designing and implementing complex data solutions. Expert in SQL for data manipulation, analysis, and optimization. Strong programming skills in Python for data engineering tasks.
Posted 2 weeks ago
5.0 - 8.0 years
15 - 30 Lacs
Hyderabad
Work from Office
5+ yrs exp as a Data Engineer with a strong track record of designing and implementing complex data solutions. Expert in SQL for data manipulation, analysis, and optimization. Strong programming skills in Python for data engineering tasks.
Posted 2 weeks ago
8.0 - 10.0 years
13 - 18 Lacs
Chennai
Work from Office
Core Qualifications:
- 12+ years in software/data architecture with hands-on experience.
- Agentic AI & AWS Bedrock (Must-Have): Demonstrated hands-on design, deployment, and operational experience with Agentic AI solutions leveraging AWS Bedrock and AWS Bedrock Agents.
- Deep expertise in cloud-native architectures on AWS (compute, storage, networking, security).
- Proven track record defining technology stacks across microservices, event streaming, and modern data platforms (e.g., Snowflake, Databricks).
- Proficiency with CI/CD and IaC (Azure DevOps, Terraform).
- Strong knowledge of data modeling, API design (REST/GraphQL), and integration patterns (ETL/ELT, CDC, messaging).
- Excellent communication and stakeholder-management skills; able to translate complex tech into business value.
Preferred:
- Media or broadcasting industry experience.
- Familiarity with Salesforce or other enterprise iPaaS solutions.
- Certifications: AWS/Azure/GCP Architect, Salesforce Integration Architect, TOGAF.
Mandatory Skills: Generative AI.
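For illustration of the AWS Bedrock experience this role requires, a hedged boto3 sketch follows; the region, model id, and prompt are examples, model availability varies by account, and agent workflows would use the bedrock-agent-runtime client instead.

```python
# Hedged sketch of calling a foundation model on AWS Bedrock with boto3.
# The region, model id, and prompt are examples; the request body format
# depends on the chosen model family, and access must be enabled in the account.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize today's ingest failures."}],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model id
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read()))
```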
Posted 2 weeks ago