3.0 - 8.0 years
10 - 18 Lacs
Chandigarh
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred. Mandatory Key Skills: Data analytics, ETL, SQL, Python, Google BigQuery, AWS Redshift, Data architecture
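As an illustration of the kind of load pattern such roles involve, below is a minimal sketch of a Python step that bulk-loads transformed data from S3 into Redshift with a COPY statement. The table, bucket, and IAM role names are hypothetical, and connection details are assumed to come from the environment.

```python
import os
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Hypothetical names; replace with your own schema, bucket, and IAM role.
COPY_SQL = """
    COPY analytics.daily_sales
    FROM 's3://example-bucket/staging/daily_sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

def load_to_redshift() -> None:
    # Connection details are read from environment variables, not hard-coded.
    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=5439,
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # Redshift ingests the staged Parquet files in parallel
    finally:
        conn.close()

if __name__ == "__main__":
    load_to_redshift()
```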
Posted recently
3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About the Role
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with cross-functional teams to design and implement data platform solutions. Develop and maintain data pipelines for efficient data processing. Implement data security and privacy measures to protect sensitive information. Optimize data storage and retrieval processes for improved performance. Conduct regular data platform performance monitoring and troubleshooting.
Professional & Technical Skills: Must have proficiency in Databricks Unified Data Analytics Platform. Strong understanding of cloud-based data platforms. Experience with data modeling and database design. Hands-on experience with ETL processes and tools. Knowledge of data governance and compliance standards.
Additional Information: The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Ahmedabad office.
Posted recently
2.0 - 6.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Diverse Lynx is looking for a Snaplogic Developer to join our dynamic team and embark on a rewarding career journey. A Developer is responsible for designing, developing, and maintaining software applications and systems. They collaborate with a team of software developers, designers, and stakeholders to create software solutions that meet the needs of the business.
Key responsibilities: Design, code, test, and debug software applications and systems. Collaborate with cross-functional teams to identify and resolve software issues. Write clean, efficient, and well-documented code. Stay current with emerging technologies and industry trends. Participate in code reviews to ensure code quality and adherence to coding standards. Participate in the full software development life cycle, from requirement gathering to deployment. Provide technical support and troubleshooting for production issues.
Requirements: Strong programming skills in one or more programming languages, such as Python, Java, C++, or JavaScript. Experience with software development tools, such as version control systems (e.g., Git), integrated development environments (IDEs), and debugging tools. Familiarity with software design patterns and best practices. Good communication and collaboration skills.
Posted recently
8.0 - 13.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Install, configure, and maintain database systems (e.g., Oracle, SQL Server, MySQL, PostgreSQL, or cloud-based databases). Monitor database performance and proactively tune queries, indexing, and configurations to ensure optimal efficiency. Manage database security, including role-based access control, encryption, and auditing. Oversee backup, recovery, high availability (HA), and disaster recovery (DR) strategies. Perform database upgrades, patching, migrations, and capacity planning. Collaborate with developers to optimize queries, stored procedures, and schema design. Automate routine tasks using scripts (Python, Bash, PowerShell, etc.). Implement and manage replication, clustering, and failover solutions. Maintain documentation of database configurations, policies, and procedures. Stay updated with emerging technologies, best practices, and compliance requirements. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience). 8+ years of hands-on experience as a Database Administrator. Strong expertise in at least one major RDBMS (Oracle, SQL Server, PostgreSQL, MySQL). Proficiency in performance tuning, query optimization, and troubleshooting. Experience with Exadata and RAC setup. Experience with HA/DR solutions such as Always On, Patroni, Data Guard, or replication technologies. Should have fundamental knowledge of Linux/Unix and different kinds of storage. Knowledge of cloud database services (AWS RDS/Redshift, Azure SQL, Google Cloud SQL/BigQuery). Solid understanding of data security and compliance standards (GDPR, HIPAA, PCI-DSS). Strong scripting and automation skills (Python, Shell, PowerShell, etc.). Should be able to perform multi-region replication and create database images in containers. Excellent analytical, problem-solving, and communication skills. Preferred technical and professional experience: Certifications such as Oracle Certified Professional (OCP), Microsoft Certified: Azure Database Administrator, AWS Certified Database Specialty. Familiarity with DevOps practices, CI/CD pipelines, and database version control tools (Liquibase, Flyway). Exposure to big data technologies (Hadoop, Spark).
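One small, hedged example of the scripted automation this role calls for: a Python check that flags long-running queries on a PostgreSQL-family database using the standard pg_stat_activity view. The threshold and the PG_DSN environment variable are illustrative assumptions.

```python
import os
import psycopg2

# Illustrative threshold; tune to your environment.
LONG_RUNNING_SECONDS = 300

QUERY = """
    SELECT pid, usename, state, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > %s * interval '1 second'
    ORDER BY runtime DESC;
"""

def report_long_running_queries() -> None:
    # Connection string comes from the environment, e.g. "host=... dbname=... user=...".
    with psycopg2.connect(os.environ["PG_DSN"]) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY, (LONG_RUNNING_SECONDS,))
            for pid, user, state, runtime, query in cur.fetchall():
                print(f"[{runtime}] pid={pid} user={user} state={state}: {query[:80]}")

if __name__ == "__main__":
    report_long_running_queries()
```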
Posted recently
4.0 - 9.0 years
0 - 1 Lacs
Bengaluru
Work from Office
Build cloud-native applications with AWS Lambda, ECS Fargate, DynamoDB, SNS, and SQS. Automate infrastructure using AWS CDK and CloudFormation (VPC, IAM, subnets, security groups) from the network layer to the application layer. Build ETL pipelines with SQL, Apache Spark, AWS Redshift, Athena, S3, and Iceberg.
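For context on the infrastructure-as-code side of this posting, here is a minimal AWS CDK (v2, Python) sketch that defines an SQS queue and a DynamoDB table in a stack; the stack, construct, and attribute names are placeholders, not part of the posting.

```python
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_sqs as sqs
from aws_cdk import aws_dynamodb as dynamodb
from constructs import Construct

class DataAppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Queue that buffers incoming events before processing.
        sqs.Queue(self, "IngestQueue",
                  visibility_timeout=Duration.seconds(120))

        # Table keyed by a single partition key; on-demand billing.
        dynamodb.Table(self, "EventsTable",
                       partition_key=dynamodb.Attribute(
                           name="event_id",
                           type=dynamodb.AttributeType.STRING),
                       billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST)

app = App()
DataAppStack(app, "DataAppStack")
app.synth()
```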
Posted recently
3.0 - 8.0 years
10 - 18 Lacs
Varanasi
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred. Mandatory Key Skills: ETL pipelines, data warehouses, SQL, Python, AWS Redshift, Google BigQuery, ETL
Posted recently
3.0 - 8.0 years
10 - 18 Lacs
Coimbatore
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred. Mandatory Key Skills: Python, AWS Redshift, Google BigQuery, ETL pipelines, data warehousing, data architectures, SQL
Posted recently
3.0 - 8.0 years
10 - 18 Lacs
Mysuru
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred. Mandatory Key Skills: Data analytics, ETL, SQL, Python, Google BigQuery, AWS Redshift, Data architecture
Posted recently
3.0 - 8.0 years
6 - 15 Lacs
Bengaluru
Remote
We are looking for an experienced Data Engineer to join our team and support the Data Analytics Platform. This role focuses on building and maintaining scalable, secure, and high-performance data infrastructure using AWS services.
Posted recently
3.0 - 8.0 years
10 - 18 Lacs
Kanpur
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred. Mandatory Key Skills: Python, data warehousing, ETL, Amazon Redshift, BigQuery, data engineering, data architecture, AWS, machine learning, data flow, ETL pipelines, real-time data processing, Java, Spring Boot, microservices, Spark, Kafka, Cassandra, Scala, NoSQL, MongoDB, REST, Redis, SQL
Posted recently
3.0 - 8.0 years
10 - 18 Lacs
Nagpur
Work from Office
Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred. Mandatory Key Skills: SQL, Python, data warehousing, ETL, Amazon Redshift, BigQuery, data engineering, AWS, machine learning, real-time data processing, Java, Spring Boot, microservices, Spark, Kafka, Cassandra, Scala, NoSQL, MongoDB, Redis, data architecture
Posted recently
7.0 - 12.0 years
25 - 40 Lacs
Kochi, Bengaluru
Work from Office
Expertise in Tableau. Experience integrating with Redshift or other DWH platforms and understanding of DWH concepts. Python or similar scripting languages. Experience with AWS Glue, Athena, or other AWS data services. Strong SQL skills and AWS QuickSight.
Posted recently
7.0 - 10.0 years
25 - 32 Lacs
Bengaluru
Work from Office
Position Overview: We seek a highly skilled and experienced Data Engineering Lead to join our team. This role demands deep technical expertise in Apache Spark, Hive, Trino (formerly Presto), Python, AWS Glue, and the broader AWS ecosystem. The ideal candidate will possess strong hands-on skills and the ability to design and implement scalable data solutions, optimize performance, and lead a high-performing team to deliver data-driven insights.
Key Responsibilities:
Technical Leadership: Lead and mentor a team of data engineers, fostering best practices in coding, design, and delivery. Drive the adoption of modern data engineering frameworks, tools, and methodologies to ensure high-quality and scalable solutions. Translate complex business requirements into effective data pipelines, architectures, and workflows.
Data Pipeline Development: Architect, develop, and optimize scalable ETL/ELT pipelines using Apache Spark, Hive, AWS Glue, and Trino. Handle complex data workflows across structured and unstructured data sources, ensuring performance and cost-efficiency. Develop real-time and batch processing systems to support business intelligence, analytics, and machine learning applications.
Cloud & Infrastructure Management: Build and maintain cloud-based data solutions using AWS services like S3, Athena, Redshift, EMR, DynamoDB, and Lambda. Design and implement federated query capabilities using Trino for diverse data sources. Manage Hive Metastore for schema and metadata management in data lakes.
Performance Optimization: Optimize Apache Spark jobs and Hive queries for performance, ensuring efficient resource utilization and minimal latency. Implement caching and indexing strategies to accelerate query performance in Trino. Continuously monitor and improve system performance through diagnostics and tuning.
Collaboration & Stakeholder Engagement: Work closely with data scientists, analysts, and business teams to understand requirements and deliver actionable insights. Ensure that data infrastructure aligns with organizational goals and compliance standards.
Data Governance & Quality: Establish and enforce data quality standards, governance practices, and monitoring processes. Ensure data security, privacy, and compliance with regulatory frameworks.
Innovation & Continuous Learning: Stay ahead of industry trends, emerging technologies, and best practices in data engineering. Proactively identify and implement improvements in data architecture and processes.
Qualifications:
Required Technical Expertise: Advanced proficiency with Apache Spark (core, SQL, streaming) for large-scale data processing. Strong expertise in Hive for querying and managing structured data in data lakes. In-depth knowledge of Trino (Presto) for federated querying and high-performance SQL execution. Solid programming skills in Python with frameworks like PySpark and Pandas. Hands-on experience with AWS Glue, including Glue ETL jobs, Glue Data Catalog, and Glue Crawlers. Deep understanding of data formats such as Parquet, ORC, Avro, and their use cases.
Cloud Proficiency: Expertise in AWS services, including S3, Redshift, Athena, EMR, DynamoDB, and IAM. Experience designing scalable and cost-efficient cloud-based data solutions.
Performance Tuning: Strong ability to optimize Apache Spark jobs, Hive queries, and Trino workloads for distributed environments. Experience with advanced techniques like partitioning, bucketing, and query plan optimization.
Leadership & Collaboration: Proven experience leading and mentoring data engineering teams. Strong communication skills, with the ability to interact with technical and non-technical stakeholders effectively.
Education & Experience: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 8+ years of experience in data engineering with a minimum of 2 years in a leadership role.
Qualifications: 8+ years of experience in building data pipelines from scratch in large data volume environments. AWS certifications, such as AWS Certified Data Analytics or AWS Certified Solutions Architect. Experience with Kafka or Kinesis for real-time data streaming would be a plus. Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes. Knowledge of CI/CD pipelines and DevOps practices for data engineering. Prior experience with data lake architectures and integrating ML workflows.
Mandatory Key Skills: CI/CD, DevOps, data engineering, Apache Spark jobs, Hive queries, performance tuning, AWS Glue, data governance, AWS, Spark, Python, Hive, ETL
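As a hedged illustration of the partitioning and file-format practices this role emphasizes, the following PySpark sketch writes a dataset as Parquet partitioned by date. The bucket paths and column names are assumptions for illustration, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioned-write-example").getOrCreate()

# Hypothetical input: event records with an event_ts timestamp column.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Derive a date partition column and drop exact duplicates before writing.
cleaned = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates()
)

# Partitioning by date keeps downstream Hive/Trino/Athena scans narrow.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)
```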
Posted recently
6.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Job Overview
We are seeking an experienced and highly skilled Sr Analytics Engineer to join our high-performance Analytics team. As a Senior Analytics Engineer, you will play a key role in driving data-driven decision-making processes and providing valuable insights to support business strategies. The ideal candidate should have a strong background in business intelligence, data analysis, and data modelling. In this role, you will have the opportunity to work with various departments in the Channel BU to extract and interpret data, derive valuable insights to support multiple business units/functions, develop and implement analytics solutions, and track key performance metrics to improve business performance. You will also oversee the development of data models and dashboards. This is the ideal role for someone who loves working with numbers and data modelling and would like to gain practical experience in a business environment.
Job Requirements
Design, develop, and maintain interactive analytics solutions for leadership and stakeholders. Build, optimize, and maintain semantic models, star/snowflake schemas, and data models to support high-performance, scalable self-service analytics. Architect and manage data warehouse structures, including fact and dimension tables, ETL/ELT pipelines, and incremental data loading strategies. Develop advanced analytical metrics and calculations using DAX, LOD expressions, SQL window functions, and aggregate functions. Drive governance, best practices, and standardization for BI solutions, including data modeling, visualization, usability, and data quality frameworks. Collaborate with business teams to translate complex requirements into actionable, high-performance analytical solutions. Leverage Power Query, Tableau Prep, or Python for advanced data transformations and automated data preparation workflows. Optimize data preparation workflows, automation, and performance tuning for large-scale datasets. Implement row-level security, role-based access, and metadata management to ensure secure and governed analytics solutions. Implement metadata management, version control, and documentation for analytics solutions to ensure maintainability and scalability. Communicate insights, patterns, and trends effectively through storytelling with data, enabling data-driven strategic decisions. Participate in Agile delivery processes, including sprint planning, backlog grooming, and retrospectives to ensure timely delivery of analytics solutions.
What your background should look like
6-8 years of hands-on experience in business intelligence, designing and building visualizations and dashboards using Tableau and Power BI. Must have strong analytical skills; analytical skills are used to identify and evaluate problem situations for improvement opportunities and make recommendations as applicable. 2+ years of experience with AWS Cloud, e.g., working knowledge of AWS Redshift and S3, strong experience developing semantic data models, and a detailed understanding of data warehousing concepts. 2+ years of experience in Exploratory Data Analysis (EDA) and automating processes using Python. Excellent hands-on experience with Tableau Dashboard, Power BI, and Tableau Server, and understanding of data warehouse concepts. Exposure to version control and collaborative development using Git or similar tools. Experience with generative AI and natural language tools such as Microsoft Copilot, Salesforce Einstein, or Databricks Genie is a plus.
Good working experience with Python, SQL, Power Query, and Tableau Prep (nice to have).
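To make the EDA expectation concrete, here is a small, generic pandas sketch of the kind of exploratory pass the posting alludes to; the file name and columns are illustrative only.

```python
import pandas as pd

# Illustrative input: a sales extract with region, order_date, and revenue columns.
df = pd.read_csv("sales_extract.csv", parse_dates=["order_date"])

# Quick structural checks: shape, dtypes, nulls, and summary statistics.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())
print(df.describe(include="all"))

# A first aggregate: monthly revenue by region, the kind of cut a semantic model would expose.
monthly = (
    df.assign(order_month=df["order_date"].dt.to_period("M"))
      .groupby(["region", "order_month"], as_index=False)["revenue"]
      .sum()
      .sort_values(["region", "order_month"])
)
print(monthly.head())
```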
Posted recently
6.0 - 9.0 years
10 - 15 Lacs
Noida
Work from Office
Design and implement scalable data processing solutions using Apache Spark and Java. Develop and maintain high-performance backend services and APIs. Collaborate with data scientists, analysts, and other engineers to understand data requirements. Optimize Spark jobs for performance and cost-efficiency. Ensure code quality through unit testing, integration testing, and code reviews. Work with large-scale datasets in distributed environments (e.g., Hadoop, AWS EMR, Databricks). Monitor and troubleshoot production systems and pipelines. Experience in Agile development processes. Experience in leading a 3-5 member team on the technology front. Excellent communication, problem-solving, debugging, and troubleshooting skills. Mandatory Competencies: Programming Language - Java - Core Java (Java 8+); Architecture - Architectural Patterns - Microservices; Data Science and Machine Learning - Apache Spark; Tech - Unit Testing; Data Science and Machine Learning - Databricks; Big Data - Hadoop; Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Agile - Extreme Programming; Big Data - Spark; Behavioral - Communication and collaboration.
Posted 1 hour ago
4.0 - 8.0 years
7 - 11 Lacs
Noida
Work from Office
Design and implement scalable data processing solutions using Apache Spark and Java. Develop and maintain high-performance backend services and APIs. Collaborate with data scientists, analysts, and other engineers to understand data requirements. Optimize Spark jobs for performance and cost-efficiency. Ensure code quality through unit testing, integration testing, and code reviews. Work with large-scale datasets in distributed environments (e.g., Hadoop, AWS EMR, Databricks). Monitor and troubleshoot production systems and pipelines. Experience in Agile development processes. Excellent communication, problem-solving, debugging, and troubleshooting skills. Mandatory Competencies: Architecture - Architectural Patterns - Microservices; Programming Language - Java - Core Java (Java 8+); Data Science and Machine Learning - Apache Spark; Tech - Unit Testing; Big Data - Hadoop; Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Data Science and Machine Learning - Databricks; Agile - Extreme Programming; Behavioral - Communication and collaboration.
Posted 1 hour ago
6.0 - 10.0 years
5 - 9 Lacs
Noida
Work from Office
AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect certification) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and the ability to deliver independently. Mandatory Competencies: Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Big Data - PySpark; Behavioral - Communication; Database - Database Programming - SQL.
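For a sense of the Glue/PySpark skill set named above, here is a minimal, hedged AWS Glue job script in Python. The catalog database, table, and output path are placeholders, and the script follows standard Glue job boilerplate rather than any specific employer's pipeline.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# JOB_NAME is passed in by Glue when the job runs.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Simple transformation: drop rows with a null order_id.
df = source.toDF().filter("order_id IS NOT NULL")

# Write the result back to S3 as Parquet (placeholder bucket).
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```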
Posted 1 hour ago
8.0 - 10.0 years
9 - 13 Lacs
Noida
Work from Office
AWS data developers with 8-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect certification) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and the ability to deliver independently. Mandatory Competencies: Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Behavioral - Communication; Big Data - PySpark; Database - Database Programming - SQL; Programming Language - Python - Apache Airflow.
Posted 1 hour ago
2.0 - 3.0 years
6 - 7 Lacs
Coimbatore
Work from Office
ETL Developer Job Title: ETL Developer FTE Location: Coimbatore Start Date: ASAP Job Summary: We are looking for an experienced ETL Developer with strong expertise in Apache Airflow , Redshift , and SQL-based data pipelines, with upcoming transitions to Snowflake . This is a contract role based in Coimbatore, ideal for professionals who can independently deliver high-quality ETL solutions in a cloud-native, fast-paced environment. Key Responsibilities: 1. ETL Design and Development: Design and develop scalable and modular ETL pipelines using Apache Airflow , with orchestration and monitoring capabilities. Translate business requirements into robust data transformation pipelines across cloud data platforms. Develop reusable ETL components to support a configuration-driven architecture. 2. Data Integration and Transformation: Integrate data from multiple sources: Redshift , flat files, APIs, Excel, and relational databases. Implement transformation logic such as cleansing, standardization, enrichment, and deduplication. Manage incremental and full loads, along with SCD handling strategies. 3. SQL and Database Development: Write performant SQL queries for data staging and transformation within Redshift and Snowflake . Utilize joins, window functions, and aggregations effectively. Ensure indexing and query tuning for high-performance workloads. 4. Performance Tuning: Optimize data pipelines and orchestrations for large-scale data volumes. Tune SQL queries and monitor execution plans. Implement best practices in distributed data processing and cloud-native optimizations. 5. Error Handling and Logging: Implement robust error handling and logging in Airflow DAGs. Enable retry logic, alerting mechanisms, and failure notifications. 6. Testing and Quality Assurance: Conduct unit and integration testing of ETL jobs. Validate data outputs against business rules and source systems. Support QA during UAT cycles and help resolve data defects. 7. Deployment and Scheduling: Deploy pipelines using Git-based CI/CD practices. Schedule and monitor DAGs using Apache Airflow and integrated tools. Troubleshoot failures and ensure data pipeline reliability. 8. Documentation and Maintenance: Document data flows, DAG configurations, transformation logic, and operational procedures. Maintain change logs and update job dependency charts. 9. Collaboration and Communication: Work closely with data architects, analysts, and BI teams to define and fulfill data needs. Participate in stand-ups, sprint planning, and post-deployment reviews. 10. Compliance and Best Practices: Ensure ETL processes adhere to data security, governance, and privacy regulations (HIPAA, GDPR, etc.). Follow naming conventions, version control standards, and deployment protocols. Required Skills & Experience: 3–6 years of hands-on experience in ETL development. Proven experience with Apache Airflow , Amazon Redshift , and strong SQL. Strong understanding of data warehousing concepts and cloud-based data ecosystems. Familiarity with handling flat files, APIs, and external sources. Experience with job orchestration, error handling, and scalable transformation patterns. Ability to work independently and meet deadlines. Preferred Skills: Exposure to Snowflake or plans to migrate to Snowflake platforms. Experience in healthcare , life sciences , or regulated environments is a plus. Familiarity with Azure Data Factory , Power BI , or other cloud BI tools. Knowledge of Git, Azure DevOps, or other version control and CI/CD platforms. 
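The responsibilities above call out DAG-level retries, alerting, and failure notifications. As a hedged sketch (the schedule, task names, and notify_failure callback are illustrative assumptions), a minimal Apache Airflow DAG with those hooks might look like this:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Illustrative callback: in practice this could post to Slack, PagerDuty, or email.
    task_id = context["task_instance"].task_id
    print(f"Task {task_id} failed; sending alert.")

def extract():
    print("Pulling incremental data from the source system.")

def load():
    print("Loading transformed data into Redshift.")

default_args = {
    "owner": "data-eng",
    "retries": 2,                           # retry logic required by the role
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_failure,  # alerting on failure
}

with DAG(
    dag_id="example_redshift_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```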
Posted 1 day ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
We are seeking a highly skilled Data Architect with expertise in AWS cloud services and data integration to join our team. The ideal candidate will be responsible for architecting, designing, and implementing scalable, secure, and high-performance data solutions. This role requires close collaboration with cross-functional teams and leadership in driving technical excellence across all stages of the data lifecycle. Sr. Data Engineer (AWS Tech Stack) Location: Pune (WFO) Experience : 5+ Years Roles & Responsibilities Architect and Design: Design and implement data integration solutions using AWS services. Develop architecture blueprints and detailed documentation. Ensure solutions are scalable, secure, and aligned with best practices. Implementation of Logs, Alerting and Monitoring at various stages of Data flow. Data Management: Manage and optimize ETL processes to ensure efficient data flow. Integrate data from various sources into data lakes, data warehouses, and other storage solutions. Ensure data quality, consistency, and integrity across the data lifecycle. Data Pipeline Management Ensuring the data ingestions and pipelines are robust and scalable for huge data sets. Collaboration and Leadership: Work closely with data engineers, data scientists, and other stakeholders to understand data requirements and deliver solutions. Lead technical discussions and provide guidance to the development team. Conduct code reviews and ensure adherence to coding standards. Performance Optimization: Monitor and optimize the performance of data integration processes. Troubleshoot and resolve issues related to data integration and ETL pipelines. Implement and manage data security and compliance measures. Semantic and Aggregation layer design and review Dashboard performance tuning (Looker) Qualifications: Must have Education and Experience: Bachelors or Masters degree in Computer Science, Information Technology, or related field. Minimum of 5 years of experience in data integration and ETL processes. Extensive experience with AWS services such as AWS Glue, AWS Lambda, Amazon Redshift, Amazon S3, and AWS Data Pipeline. Technical Skills: Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL). Strong programming skills in Python Experience with data modeling, data warehousing, and big data technologies. Familiarity with data governance and data security best practices. Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to work independently and as part of a team. Good to Have AWS Certified Solutions Architect or equivalent certification. Experience with machine learning and AI technologies. Familiarity with other cloud platforms (Google Cloud). Experience and exposure to US health care and Life science GCP Look ML Apply today to be part of an innovative team driving data excellence through cutting-edge AWS technologies on below mail Id kiran.ghorpade@neutrinotechlabs.com
Posted 1 day ago
4.0 - 6.0 years
20 - 24 Lacs
Bengaluru
Work from Office
Overview We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom’s branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world’s creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, Consulting & Advisory Services. We currently have 2500+ awesome colleagues (in Annalect India) who are committed to solve our clients’ pressing business issues. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together . Responsibilities Requirement gathering and evaluation of clients’ business situations in order to implement appropriate analytic solutions. Designs, generates and manages reporting frameworks that provide insight as to the performance of clients’ marketing activities across multiple channels. Be the single point of contact on anything data & analytics related to the project. QA process: Maintain, create and re-view QA plans for deliverables to align with the requirements, identify discrepancies if any and troubleshoot issues. Prioritize tasks and proactively manage workload to ensure timely delivery with high accuracy. Active contribution to project planning and scheduling. Create and maintain project specific documents such as process / quality / learning documents. Should be able to drive conversation with team, client and business stake holders Experience in managing global clients with strong account management background Strong relationship building skills & Excellent project and resource management skills Maintaining positive client and vendor relationships. Qualifications 10+ years of experience in media/marketing services or relevant domains with strong problem-solving ability. Strong knowledge on Advance SQL, Redshift, Alteryx, Tableau, Media knowledge, Data modeling, Advance excel are mandatory to have. Adverity and Python are good-to-have. Ability to identify and help determine key performance indicators for the clients. Strong written and verbal communication skills. Led delivery teams and projects to successful implementations. Familiarity working with large data sets and creating cohesive stories. Able to work and lead successfully with teams, handling multiple projects and meeting timelines. Presentation skills using MS Power Point or any presentation platforms Strong written and verbal communication skills Resourceful and self-motivated Strong project management and administrative skills Strong analytical skills with superior attention to detail and Demonstrate strong problem solving and troubleshooting skills
Posted 2 days ago
4.0 - 6.0 years
8 - 13 Lacs
Saidapet, Tamil Nadu
Work from Office
Introduction to the Role: Are you passionate about unlocking the power of data to drive innovation and transform business outcomes? Join our cutting-edge Data Engineering team and be a key player in delivering scalable, secure, and high-performing data solutions across the enterprise. As a Data Engineer, you will play a central role in designing and developing modern data pipelines and platforms that support data-driven decision-making and AI-powered products. With a focus on Python, SQL, AWS, PySpark, and Databricks, you'll enable the transformation of raw data into valuable insights by applying engineering best practices in a cloud-first environment. We are looking for a highly motivated professional who can work across teams to build and manage robust, efficient, and secure data ecosystems that support both analytical and operational workloads. Accountabilities: Design, build, and optimize scalable data pipelines using PySpark, Databricks, and SQL on AWS cloud platforms. Collaborate with data analysts, data scientists, and business users to understand data requirements and ensure reliable, high-quality data delivery. Implement batch and streaming data ingestion frameworks from a variety of sources (structured, semi-structured, and unstructured data). Develop reusable, parameterized ETL/ELT components and data ingestion frameworks. Perform data transformation, cleansing, validation, and enrichment using Python and PySpark. Build and maintain data models, data marts, and logical/physical data structures that support BI, analytics, and AI initiatives. Apply best practices in software engineering, version control (Git), code reviews, and agile development processes. Ensure data pipelines are well-tested, monitored, and robust with proper logging and alerting mechanisms. Optimize performance of distributed data processing workflows and large datasets. Leverage AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena) for data orchestration and lakehouse architecture design. Participate in data governance practices and ensure compliance with data privacy, security, and quality standards. Contribute to documentation of processes, workflows, metadata, and lineage using tools such as Data Catalogs or Collibra (if applicable). Drive continuous improvement in engineering practices, tools, and automation to increase productivity and delivery quality. Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python and experience using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures. Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams.
Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
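As a small, hedged companion to the PySpark cleansing and deduplication work described above, the sketch below keeps only the latest record per key using a window function; the paths, key, and timestamp column are illustrative assumptions.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedupe-latest-example").getOrCreate()

# Illustrative input: customer change records keyed by customer_id with an updated_at timestamp.
updates = spark.read.parquet("s3://example-bucket/raw/customer_updates/")

# Rank records per key by recency and keep only the newest one.
latest_first = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

deduped = (
    updates
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

deduped.write.mode("overwrite").parquet("s3://example-bucket/curated/customers/")
```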
Posted 2 days ago
5.0 - 10.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Overview As an Analyst, Data Modeler, your focus would be to partner with D&A Data Foundation team members to create data models for Global projects. This would include independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. Role will advocate Enterprise Architecture, Data Design, and D&A standards, and best practices. You will be performing all aspects of Data Modeling working closely with Data Governance, Data Engineering and Data Architects teams. As a member of the data Modeling team, you will create data models for very large and complex data applications in public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse. Responsibilities Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Data Bricks, Snowflake, Azure Synapse or other Cloud data warehousing technologies. Governs data design/modeling documentation of metadata (business definitions of entities and attributes) and constructions database objects, for baseline and investment funded projects, as assigned. Provides and/or supports data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Supports assigned project contractors (both on- & off-shore), orienting new contractors to standards, best practices, and tools. Contributes to project cost estimates, working with senior members of team to evaluate the size and complexity of the changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives, maximizes reuse. Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for the proper managementbusiness and financial policies, security, local-market regulatory rules, consumer privacy by design principles (PII management) and all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, security features implementation performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create Source To Target Mappings for ETL and BI developers. Show expertise for data at all levelslow-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production), data in-transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing. 
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization. Qualifications Bachelors degree required in Computer Science, Data Management/Analytics/Science, Information Systems, Software Engineering or related Technology Discipline. 5+ years of overall technology experience that includes at least 2+ years of data modeling and systems architecture. Around 2+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 2+ years of experience developing enterprise data models. Experience in building solutions in the retail or in the supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks and Azure Machine learning is a plus. Experience of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including Dev Ops and Data Ops concepts. Familiarity with business intelligence tools (such as Power BI). Excellent verbal and written communication and collaboration skills.
Posted 2 days ago
3.0 - 8.0 years
2 - 6 Lacs
Pune
Work from Office
Job Purpose
ICE Mortgage Technology is driving value to every customer through our effort to automate everything that can be automated in the residential mortgage industry. Our integrated solutions touch each aspect of the loan lifecycle, from the borrower's "point of thought" through e-Close and secondary solutions. Drive real automation that reduces manual workflows, increases productivity, and decreases risk. You will be working in a dynamic product development team while collaborating with other developers, management, and customer support teams. You will have an opportunity to participate in designing and developing services utilized across product lines. The ideal candidate should possess a product mentality, have a strong sense of ownership, and strive to be a good steward of his or her software. More than any concrete experience with specific technology, it is critical for the candidate to have a strong sense of what constitutes good software, be thoughtful and deliberate in picking the right technology stack, and be always open-minded to learn (from others and from failures).
Responsibilities
Develop high-quality data processing infrastructure and scalable services that are capable of ingesting and transforming data at huge scale, coming from many different sources on schedule. Turn ideas and concepts into carefully designed and well-authored quality code. Articulate the interdependencies and the impact of the design choices. Develop APIs to power data-driven products and external APIs consumed by internal and external customers of the data platform. Collaborate with QA, product management, engineering, and UX to achieve well-groomed, predictable results. Improve and develop new engineering processes and tools.
Knowledge and Experience
3+ years of building enterprise software products. Experience in object-oriented design and development with languages such as Java, J2EE, and related frameworks. Experience building REST-based microservices in a distributed architecture along with any cloud technologies (AWS preferred). Knowledge of Java/J2EE frameworks like Spring Boot, Microservices, JPA, JDBC, and related frameworks is a must. Built high-throughput real-time and batch data processing pipelines using Kafka on an AWS environment with AWS services like S3, Kinesis, Lambda, RDS, DynamoDB, or Redshift (should know the basics at least). Experience with a variety of data stores for unstructured and columnar data as well as traditional database systems, for example, MySQL and Postgres. Proven ability to deliver working solutions on time. Strong analytical thinking to tackle challenging engineering problems. Great energy and enthusiasm with a positive, collaborative working style, clear communication, and writing skills. Experience working in a DevOps environment ("you build it, you run it"). Demonstrated ability to set priorities and work in a fast-paced, dynamic team environment within a start-up culture. Experience with big data technologies and exposure to Hadoop, Spark, AWS Glue, AWS EMR, etc. (nice to have). Experience with handling large data sets using technologies like HDFS, S3, Avro, and Parquet (nice to have).
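Since the role highlights high-throughput Kafka ingestion feeding AWS storage, here is a deliberately small consumer sketch in Python using the kafka-python package; the topic, broker address, and bucket are assumptions for illustration, and the posting itself targets Java for production services.

```python
import json

import boto3
from kafka import KafkaConsumer  # kafka-python package

# Placeholder topic and broker; real deployments would also configure auth and TLS.
consumer = KafkaConsumer(
    "loan-events",
    bootstrap_servers="localhost:9092",
    group_id="loan-events-archiver",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

s3 = boto3.client("s3")

# Archive each event to S3 keyed by its event id (micro-batching omitted for brevity).
for message in consumer:
    event = message.value
    key = f"events/{event.get('event_id', message.offset)}.json"
    s3.put_object(
        Bucket="example-archive-bucket",
        Key=key,
        Body=json.dumps(event).encode("utf-8"),
    )
```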
Posted 2 days ago
10.0 - 15.0 years
14 - 19 Lacs
Kochi
Work from Office
Data Architect is responsible to define and lead the Data Architecture, Data Quality, Data Governance, ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will be working with our product, business, engineering stakeholders, understanding our current eco-systems, and then building consensus to designing solutions, writing codes and automation, defining standards, establishing best practices across the company and building world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale. Role Design, implement and lead Data Architecture, Data Quality, Data Governance Defining data modeling standards and foundational best practices Develop and evangelize data quality standards and practices Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data Drive the successful adoption of organizational data utilization and self-serviced data platforms Create and maintain critical data standards and metadata that allows data to be understood and leveraged as a shared asset Develop standards and write template codes for sourcing, collecting, and transforming data for streaming or batch processing data Design data schemes, object models, and flow diagrams to structure, store, process, and integrate data Provide architectural assessments, strategies, and roadmaps for data management Apply hands-on subject matter expertise in the Architecture and administration of Big Data platforms, Data Lake Technologies (AWS S3/Hive), and experience with ML and Data Science platforms Implement and manage industry best practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, CI/CD Translate big data and analytics requirements into data models that will operate at a large scale and high performance and guide the data analytics engineers on these data models Define templates and processes for the design and analysis of data models, data flows, and integration Lead and mentor Data Analytics team members in best practices, processes, and technologies in Data platforms Qualifications B.S. or M.S. 
in Computer Science, or equivalent degree 10+ years of hands-on experience in Data Warehouse, ETL, Data Modeling & Reporting 7+ years of hands-on experience in productionizing and deploying Big Data platforms and applications, Hands-on experience working with: Relational/SQL, distributed columnar data stores/NoSQL databases, time-series databases, Spark streaming, Kafka, Hive, Delta Parquet, Avro, and more Extensive experience in understanding a variety of complex business use cases and modeling the data in the data warehouse Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools Proven experience in building a Custom Enterprise Data Warehouse or implementing tools like Data Catalogs, Spark, Tableau, Kubernetes, and Docker Knowledge of infrastructure requirements such as Networking, Storage, and Hardware Optimization with hands-on experience inAmazon Web Services (AWS) Strong verbal and written communications skills are a must and should work effectively across internal and external organizations and virtual teams Demonstrated industry leadership in the fields of Data Warehousing, Data Science, and Big Data related technologies Strong understanding of distributed systems and container-based development using Docker and Kubernetes ecosystem Deep knowledge of data structures and algorithms Experience working in large teams using CI/CD and agile methodologies.
Posted 2 days ago
Amazon Redshift is a popular data warehousing solution that is widely used by companies in India. Job opportunities for Amazon Redshift professionals are on the rise as more organizations are adopting this technology to manage and analyze their data efficiently. If you are a job seeker looking to explore opportunities in this field, here is a guide to help you navigate the Amazon Redshift job market in India.
Major tech hubs such as Bengaluru, Hyderabad, Pune, Noida, and Coimbatore appear frequently in the listings above; these cities are known for their growing tech industries and have a high demand for Amazon Redshift professionals.
The average salary range for Amazon Redshift professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
A typical career path in Amazon Redshift may include roles such as Junior Developer, Senior Developer, Tech Lead, and Architect. As you gain more experience and expertise in Amazon Redshift, you can progress to higher roles with greater responsibilities.
In addition to expertise in Amazon Redshift, professionals in this field are often expected to have knowledge of SQL, ETL tools, data modeling, and cloud computing platforms such as AWS.
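If you want a concrete feel for the day-to-day skill, the snippet below runs a simple analytical query against a Redshift cluster from Python using the redshift_connector package; the cluster endpoint, database, and table are placeholders, and in practice credentials would come from IAM or a secrets manager.

```python
import os

import redshift_connector  # AWS-maintained Python driver for Amazon Redshift

# Placeholder connection details; prefer IAM authentication or a secrets manager in real use.
conn = redshift_connector.connect(
    host=os.environ["REDSHIFT_HOST"],
    database=os.environ["REDSHIFT_DB"],
    user=os.environ["REDSHIFT_USER"],
    password=os.environ["REDSHIFT_PASSWORD"],
)

cursor = conn.cursor()
# A typical warehouse-style aggregate: monthly revenue from a hypothetical sales table.
cursor.execute(
    """
    SELECT date_trunc('month', order_date) AS order_month,
           SUM(revenue) AS total_revenue
    FROM analytics.daily_sales
    GROUP BY 1
    ORDER BY 1;
    """
)
for order_month, total_revenue in cursor.fetchall():
    print(order_month, total_revenue)

conn.close()
```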
As you prepare for Amazon Redshift job interviews, make sure to brush up on your technical skills and knowledge of data warehousing concepts. With the right preparation and confidence, you can land a rewarding career in Amazon Redshift in India. Good luck!