5 - 9 years
12 - 22 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
AWS Data Engineer
To apply, use this link: https://career.infosys.com/jobdesc?jobReferenceCode=INFSYS-EXTERNAL-210775&rc=0
Job Profile: 5 to 9 years of experience in designing and implementing scalable data engineering solutions on AWS. Strong proficiency in Python. Expertise in serverless architecture and AWS services such as Lambda, Glue, Redshift, Kinesis, SNS, SQS, and CloudFormation. Experience with Infrastructure as Code (IaC) using AWS CDK to define and provision AWS resources. Proven leadership skills with the ability to mentor and guide junior team members. Excellent understanding of data modeling concepts and experience with tools like ER/Studio. Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment. Experience with Apache Airflow for orchestrating data pipelines is a plus. Knowledge of Data Lakehouse, dbt, or the Apache Hudi data format is a plus.
Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Redshift. Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.
Desired Candidate Profile: 5-9 years of experience in the IT industry with expertise in Python (PySpark). Strong understanding of the AWS ecosystem, including S3, Glue, Lambda, and Redshift. Bachelor's degree in any specialization (B.Tech/B.E.).
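For illustration, a minimal sketch of the kind of Glue PySpark job this role describes. The database, table, and bucket names are placeholders (not taken from the posting), and the script assumes the AWS Glue PySpark runtime.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Glue passes job parameters on the command line; JOB_NAME is always present.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Light transformation with Spark SQL before landing curated parquet on S3.
orders.toDF().createOrReplaceTempView("orders")
curated = spark.sql(
    "SELECT order_id, customer_id, CAST(amount AS double) AS amount, order_date "
    "FROM orders WHERE amount IS NOT NULL"
)

curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```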
Posted 1 month ago
5 - 8 years
5 - 9 Lacs
Bengaluru
Hybrid
Roles and Responsibilities: Architect and incorporate an effective data framework enabling an end-to-end data solution. Understand business needs, use cases, and drivers for insights, and translate them into detailed technical specifications. Create epics, features, and user stories with clear acceptance criteria for execution and delivery by the data engineering team. Create scalable and robust data solution designs that incorporate governance, security, and compliance aspects. Develop and maintain logical and physical data models, and work closely with data engineers, data analysts, and data testers for their successful implementation. Analyze, assess, and design data integration strategies across various sources and platforms. Create project plans and timelines while monitoring and mitigating risks and controlling project progress. Conduct the daily scrum with the team with a clear focus on meeting sprint goals and timely resolution of impediments. Act as a liaison between technical teams and business stakeholders. Guide and mentor the team on best practices for data solutions and delivery frameworks. Actively work with, facilitate, and support stakeholders/clients to complete User Acceptance Testing, and ensure strong adoption of the data products after launch. Define and measure KPIs/KRAs for features and ensure the data roadmap is verified through measurable outcomes.
Prerequisites: 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years in a Data Architect role. Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses, and data visualization solutions. Sound understanding of modern data technologies like Databricks, Snowflake, Data Mesh, and Data Fabric. Experience managing the data life cycle in a fast-paced Agile/Scrum environment. Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas clearly and concisely to technical and non-technical audiences. Ability to collaborate and work effectively with cross-functional teams, project stakeholders, and end users to produce quality deliverables within stipulated timelines. Ability to manage, coach, and mentor a team of data engineers, data testers, and data analysts. Strong process driver with expertise in the Agile/Scrum framework on tools like Azure DevOps, Jira, or Confluence. Exposure to machine learning, Gen AI, and modern AI-based solutions.
Experience: Technical Lead, Data Analytics, with 6+ years of overall experience, of which 2+ years is in data architecture.
Education: Engineering degree from a Tier 1 institute preferred.
Compensation: The compensation structure will be as per industry standards.
Posted 1 month ago
5 - 10 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
To complement the existing cross-functional team, Zensar is looking for a Data Engineer who will assist in designing and implementing scalable and robust processes to support the data engineering capability. This role is responsible for implementing and supporting large-scale data ecosystems across the Group. The incumbent will use best practices in cloud engineering, data management, and data storage to continue our drive to optimize the way data is stored, consumed, and ultimately democratized. The incumbent will also engage with stakeholders across the organization, using data engineering practices to improve the way data is stored and consumed.
Role & responsibilities: Assist in designing and implementing scalable and robust processes for ingesting and transforming complex datasets. Design, develop, construct, maintain, and support data pipelines for ETL from a multitude of sources. Create blueprints for data management systems to centralize, protect, and maintain data sources. Focused on data stewardship and curation, the data engineer enables data scientists to run their models and analyses to achieve the desired business outcomes. Ingest large, complex data sets that meet functional and non-functional requirements. Enable the business to work with large volumes of data in diverse formats and, in doing so, enable innovative solutions. Design and build bulk and delta data lift patterns for optimal extraction, transformation, and loading of data. Support the organisation's cloud strategy and align with the data architecture and governance, including the implementation of data governance practices. Engineer data in the appropriate formats for downstream customers, risk and product analytics, or enterprise applications. Assist in identifying, designing, and implementing robust process improvements to drive efficiency and automation for greater scalability, including evaluating new solutions and new ways of working and staying at the forefront of emerging technologies. Work with stakeholders across the organization to understand data requirements and apply technical knowledge of data management to solve key business problems. Provide support in the operational environment with all relevant support teams for data services. Provide input into the management of demand across the various data streams and use cases. Create and maintain functional requirements and system specifications in support of the data architecture, and detailed design specifications for current and future designs. Support testing and deployment of new services and features. Provide technical leadership to junior data engineers in the team.
Preferred candidate profile: A degree in Computer Science, Business Informatics, Mathematics, Statistics, Physics, or Engineering. 3+ years of data engineering experience. 3+ years of experience with data warehouse technical architectures, ETL/ELT, and reporting/analytics tools, including but not limited to any of the following: (1) SSIS, SSRS, or similar; (2) ETL frameworks; (3) Spark; (4) AWS data builds. At least proficient in Python or Java. Some experience with R, AWS, XML, JSON, and cron is beneficial. Experience designing and implementing Cloud (AWS) solutions, including use of the available APIs. Knowledge of engineering and operational excellence using standard methodologies. Best practices in software engineering, data management, data storage, data computing, and distributed systems to solve business problems with data.
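As a sketch of the "bulk and delta data lift" pattern mentioned above, the snippet below shows a watermark-based incremental load in plain PySpark. Paths, column names, and the watermark handling are placeholders/assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-lift-example").getOrCreate()

# Placeholder locations; in practice the watermark would live in a control table.
SOURCE_PATH = "s3://example-raw/events/"
TARGET_PATH = "s3://example-lake/events/"
LAST_WATERMARK = "2024-01-01 00:00:00"  # assumed to be loaded from a control table

source = spark.read.parquet(SOURCE_PATH)

# Delta lift: pick up only rows modified since the last successful run.
delta = source.filter(F.col("updated_at") > F.lit(LAST_WATERMARK))

# Land the increment partitioned by load date for cheap downstream pruning.
(delta
 .withColumn("load_date", F.current_date())
 .write.mode("append")
 .partitionBy("load_date")
 .parquet(TARGET_PATH))

# The new high-water mark would be persisted for the next run.
new_watermark = delta.agg(F.max("updated_at")).first()[0]
print(f"New watermark: {new_watermark}")
```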
Posted 1 month ago
6 - 10 years
15 - 27 Lacs
Noida, Hyderabad, Bengaluru
Work from Office
Job Description:
1. Candidates should have good experience with all the functionalities of Dataiku.
2. Should have previous exposure to handling large data sets using Dataiku, and to preparing and calculating data.
3. Should be able to write queries to extract and connect data from RDBMS/data lake and any other manual datasets.
4. Most importantly, should be able to understand existing developments and take over with minimal handover.
5. Must also be an expert in Excel, given that most of the information produced is furnished in Excel at the right level of detail to stakeholders for validation and discussion.
6. Must have an eye for accuracy, ensuring the flows are robust.
7. Banking process knowledge is good to have.
Note: Kindly go through the JD and apply accordingly; this is for PAN India hiring.
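For context on what day-to-day Dataiku work can look like, here is a minimal Python-recipe sketch. Dataset names are placeholders, and it assumes the script runs inside a Dataiku DSS project where the `dataiku` package is available.

```python
import dataiku
import pandas as pd

# Read an input dataset defined in the DSS flow (placeholder name).
raw = dataiku.Dataset("transactions_raw")
df = raw.get_dataframe()

# Example preparation step: derive a monthly aggregate for stakeholder review.
df["booking_month"] = pd.to_datetime(df["booking_date"]).dt.to_period("M").astype(str)
summary = df.groupby(["booking_month", "product"], as_index=False)["amount"].sum()

# Write the prepared data to an output dataset for the next step in the flow.
out = dataiku.Dataset("transactions_monthly")
out.write_with_schema(summary)
```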
Posted 1 month ago
6 - 10 years
15 - 22 Lacs
Noida, Hyderabad, Bengaluru
Work from Office
AWS Data Engineer with hands-on experience in Amazon Redshift and EMR, responsible for building scalable data pipelines and managing big data processing workloads. The role requires strong skills in Spark, Hive, and S3 on AWS cloud infrastructure.
Posted 1 month ago
3 - 8 years
3 - 8 Lacs
Hyderabad
Work from Office
Name of Organization: Jarus Technologies (India) Pvt. Ltd.
Organization Website: www.jarustech.com
Position: Senior Software Engineer - Data Warehouse
Domain Knowledge: Insurance (Mandatory)
Job Type: Permanent
Location: Hyderabad - IDA Cherlapally, ECIL and Divyasree Trinity, Hi-Tech City
Experience: 3+ years
Education: B. E. / B. Tech. / M. C. A.
Resource Availability: Immediately or within a maximum period of 30 days.
Technical Skills:
• Strong knowledge of data warehousing concepts and technologies.
• Proficiency in SQL and other database languages.
• Experience with ETL tools (e.g., Informatica, Talend, SSIS).
• Familiarity with data modelling techniques.
• Experience in building dimensional data modelling objects, dimensions, and facts.
• Experience with cloud-based data warehouse platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Familiar with optimizing SQL queries and improving ETL processes for better performance.
• Knowledge of data transformation, cleansing, and validation techniques.
• Experience with incremental loads, change data capture (CDC), and data scheduling.
• Comfortable with version control systems like Git.
• Familiar with BI tools like Power BI for visualization and reporting.
Responsibilities:
• Design, develop, and maintain data warehouse systems and ETL (Extract, Transform, Load) processes.
• Develop and optimize data models and schemas to support business needs.
• Design and implement data warehouse architectures, including physical and logical designs.
• Design and develop dimensions, facts, and bridges.
• Ensure data quality and integrity throughout the ETL process.
• Design and implement relational and multidimensional database structures.
• Understand data structures and fundamental design principles of data warehouses.
• Analyze and modify data structures to adapt them to business needs.
• Identify and resolve data quality issues and data warehouse problems.
• Debug ETL processes and data warehouse queries.
Communication skills:
• Good communication skills to interact with customers.
• Ability to understand requirements for implementing an insurance warehouse system.
Posted 1 month ago
6 - 10 years
15 - 20 Lacs
Gurugram
Remote
Title: Looker Developer
Team: Data Engineering
Work Mode: Remote
Shift Time: 3:00 PM - 12:00 AM IST
Contract: 12 months
Key Responsibilities: Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies. LookML Proficiency: LookML is Looker's proprietary language for defining data models; Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards. Data Modeling Expertise: Understanding how to structure and organize data within Looker is essential; this involves mapping database schemas to LookML, creating views, and defining measures and dimensions. SQL Knowledge: Looker generates SQL queries under the hood, so developers need to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL. Looker Environment: Familiarity with the Looker interface, including the IDE, LookML Validator, and SQL Runner, is necessary for efficient development.
Education and/or Experience: Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required. 6+ years of IT industry experience in the data management field.
Posted 1 month ago
8 - 13 years
30 - 40 Lacs
Bengaluru
Hybrid
Key Responsibilities:
Develop & Optimize Data Pipelines – Architect, build, and enhance scalable data pipelines for high-performance processing.
Troubleshoot & Sustain – Identify, diagnose, and resolve data pipeline issues to ensure operational efficiency.
Data Architecture & Storage – Design efficient data storage and retrieval strategies using Postgres, Redshift, and other databases.
CI/CD Pipeline Management – Implement and maintain continuous integration and deployment strategies for smooth workflow automation.
Scalability & Performance Tuning – Ensure the robustness of data solutions while optimizing performance at scale.
Collaboration & Leadership – Work closely with cross-functional teams to ensure seamless data flow and lead engineering best practices.
Security & Reliability – Establish governance protocols and ensure data integrity across all pipelines.
Technical Skills Required:
Programming: Expert in Python and Scala.
Big Data Technologies: Proficient in Spark and Kafka.
DevOps & Cloud Infrastructure: Strong understanding of Kubernetes.
SQL & Database Management: Skilled in SQL administration, Postgres, and Redshift.
CI/CD Implementation: Experience in automating deployment processes for efficient workflows.
Job Location: Bangalore
Notice Period: Immediate to 15 days.
Interested candidates can share their profiles to marygracy.antony@ilink-systems.com
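A minimal sketch of the Spark-plus-Kafka pipeline work described above, using PySpark Structured Streaming. Broker addresses, the topic, the schema, and paths are placeholders, and the Kafka connector package is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

# Requires the spark-sql-kafka connector package to be available to Spark.
spark = SparkSession.builder.appName("kafka-ingest-example").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

# Consume JSON events from a Kafka topic (placeholder broker/topic).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker-1:9092")
       .option("subscribe", "user-events")
       .option("startingOffsets", "latest")
       .load())

events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Write micro-batches as parquet, with checkpointing for recovery after failures.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/user_events/")
         .option("checkpointLocation", "s3a://example-lake/_checkpoints/user_events/")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```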
Posted 1 month ago
5 - 10 years
20 - 35 Lacs
Bengaluru, Hyderabad, Mumbai (All Areas)
Hybrid
We are looking for a Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Skills: SQL; Python; ETL - Airflow (optional); NoSQL - Bigtable (GCP, not mandatory); Storage - GCS or S3 (Compute - Google Compute Engine, Amazon EC2); BigQuery (GCP) or Redshift (AWS) - cloud DWHs
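Since the stack above lists Airflow as the ETL orchestrator, here is a minimal DAG sketch. The task logic, DAG id, and schedule are placeholders rather than anything specified in the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull data from the source system (e.g. an API or a GCS/S3 object).
    print("extracting source data")


def load(**context):
    # Placeholder: load the prepared data into the warehouse (BigQuery or Redshift).
    print("loading into the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Simple linear dependency: extract before load.
    extract_task >> load_task
```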
Posted 2 months ago
5 - 9 years
0 Lacs
Nagpur
Work from Office
Role & responsibilities
Job Role: AWS Data Engineer (L2/L3)
Experience: 5+ years
Location: Nagpur
5+ years of microservices development experience in two of these: Python, Java, Scala. 5+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores. 5+ years of experience with big data technologies: Apache Spark, Hadoop, or Kafka. 3+ years of experience with relational and non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB). 3+ years of experience working with data consumption patterns. 3+ years of experience working with automated build and continuous integration systems. 2+ years of experience with search and analytics platforms: OpenSearch or Elasticsearch. 2+ years of experience with cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena). Exposure to data warehousing products: Snowflake or Redshift. Exposure to relational data modelling, dimensional data modelling, and NoSQL data modelling concepts.
Posted 2 months ago
6 - 10 years
18 - 30 Lacs
Bengaluru
Remote
Design, develop, and optimize data pipelines and ETL/ELT workflows on AWS. Experience in data engineering and data pipeline development. AWS services, especially Redshift, Glue, S3, and Athena. Apache Iceberg or similar formats (like Delta Lake or Hudi).
Required Candidate profile: Legacy tools like Siebel, Talend, and Informatica. Data governance tools like Atlan, Collibra, or Alation. Data quality checks using Soda or equivalent. Strong SQL and Python skills; Spark.
Posted 2 months ago
8 - 10 years
5 - 15 Lacs
Bengaluru
Remote
Lead & mentor a team of data engineers Architect, develop, & optimize scalable ETL/ELT pipelines using Apache Spark, Hive, AWS Glue, and Trino Build and maintain cloud-based data solutions using AWS services Data Governance & Quality
Posted 2 months ago
6 - 11 years
9 - 15 Lacs
Chennai
Work from Office
We are seeking an experienced AWS Data Engineer with expertise in Databricks to design, build, and optimize large-scale data pipelines and data processing workflows on the AWS Cloud. The ideal candidate will have hands-on experience working with Databricks, big data technologies, and AWS-native services, ensuring efficient data ingestion, transformation, and analytics to support business-critical decisions.
Key Responsibilities:
Data Pipeline Development: Design, implement, and manage scalable ETL/ELT pipelines using AWS services and Databricks.
Data Integration: Ingest and process structured, semi-structured, and unstructured data from multiple sources into the AWS Data Lake or Databricks.
Data Transformation: Develop advanced data processing workflows using PySpark, Databricks SQL, or Scala to enable analytics and reporting.
Databricks Management: Configure and optimize Databricks clusters, notebooks, and jobs for performance and cost efficiency.
AWS Architecture: Design and implement solutions leveraging AWS-native services like S3, Glue, Redshift, EMR, Lambda, Kinesis, and Athena.
Collaboration: Work closely with Data Analysts, Data Scientists, and other engineers to understand business requirements and deliver data-driven solutions.
Performance Tuning: Optimize data pipelines, storage, and queries for performance, scalability, and reliability.
Monitoring and Security: Ensure data pipelines are secure, robust, and monitored using CloudWatch, Datadog, or equivalent tools.
Documentation: Maintain clear and concise documentation for data pipelines, workflows, and architecture.
Required Skills & Qualifications:
Data Engineering Expertise: 6+ years of experience in data engineering, with at least 2+ years working on Databricks.
AWS Cloud Services: Hands-on experience with the AWS ecosystem, including S3, Glue, Redshift, DynamoDB, Lambda, and other AWS data services.
Programming Languages: Proficiency in Python (PySpark), Scala, or SQL for data processing and transformation.
Databricks: Extensive experience with Databricks Workspace and Delta Lake, and with managing Databricks jobs and pipelines.
Big Data Frameworks: Strong knowledge of Apache Spark for distributed data processing.
Data Warehousing: Experience with modern data warehouse solutions, including Redshift, Snowflake, or Databricks SQL.
Version Control & CI/CD: Familiarity with Git, Terraform, and CI/CD pipelines for deploying data solutions.
Monitoring & Debugging: Experience with tools like CloudWatch, Datadog, or equivalent for pipeline monitoring and troubleshooting.
Preferred Qualifications: Certification in AWS Data Analytics or Databricks. Experience with real-time data streaming tools like Kafka, Kinesis, or AWS MSK. Knowledge of data governance and data security best practices. Exposure to machine learning workflows and integration with Databricks.
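To make the Databricks portion concrete, a minimal PySpark sketch of an ingest-to-Delta step, of the kind a Databricks notebook or job might run. Paths and table names are placeholders, and it assumes a workspace where Delta Lake and the OPTIMIZE command are available.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Ingest raw JSON landed on S3 (placeholder path) and standardize a few columns.
raw = spark.read.json("s3://example-raw-bucket/clickstream/2024/")

clean = (raw
         .withColumn("event_date", F.to_date("event_time"))
         .dropDuplicates(["event_id"]))

# Write a Delta table partitioned by date; downstream SQL/BI reads the table name.
(clean.write
 .format("delta")
 .mode("append")
 .partitionBy("event_date")
 .saveAsTable("analytics.clickstream_events"))

# Periodic maintenance, e.g. file compaction via OPTIMIZE (Databricks SQL).
spark.sql("OPTIMIZE analytics.clickstream_events")
```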
Posted 2 months ago
7 - 10 years
20 - 25 Lacs
Bengaluru
Work from Office
Role & responsibilities This is a strong technology and solution delivery role, accountable for the successful design, development, and delivery of Analytics solutions integrated with the corporate Data Platform not only for self, also for a team of developers working on specific projects. Perform data analysis, design Analytics Dashboards architecture and deliver the same in alignment with Global Platform standards and guidelines Interact with customers to understand their business problems and provide best-in-class analytics solutions Interact with Global Data Platform leaders and understand data flows that integrate into Tableau/analytics Understand data governance, quality and security and integrate analytics with these corporate platforms Interact with UX/UI global functions and design best in class visualization for customers harnessing all product capabilities Demonstrate strength in data modelling, ETL development, and data warehousing Proficient in SQL and Query performance tuning skills Should have worked on Data mining and reporting systems. Should be able to develop solutions using Tableau to meet enterprise level requirements. Good knowledge of building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets Hands on experience in ETL, Tableau, SQL, Advanced Excel Knowledge of leading large-scale data warehousing and analytics projects using AWS technologies Redshift, Athena, S3, EC2 and other big data technologies Strong verbal and written communication skills, with the ability to work effectively across internal and external organizations. Strong tableau experience on enterprise level data set. Must have a working knowledge of different types of charts, tables, filters, calculated fields, parameters, functions, blending, LODs, etc. in Tableau Ability to build medium to complex interactive dashboards using a different type of data sources in Tableau Strong analytical & problem-solving skills Strong verbal and business communication skills Skill in identifying data issues and anomalies during the analysis Strong business acumen & demonstrated an aptitude for analytics that incite action Preferred candidate profile 7 years of experience in Analytics Development on Tableau, ClickSense, Thotspot and other analytics products and integration with Data platforms Minimum 5 years of working experience with Tableau using Redshift/AWS and data modelling skills will be added advantage Need to be strong in SQL query writing Finance, Opportunity, Sales and Marketing domain knowledge added advantage Strong analytical skills and enjoys solving complex technical problems.
Posted 2 months ago
4 - 6 years
6 - 13 Lacs
Hyderabad
Work from Office
Roles and responsibilities: Design AWS architectures based on business requirements. Create architectural diagrams and documentation. Present cloud solutions to stakeholders.
Skills and Qualifications: Design, develop, and maintain scalable ETL/ELT pipelines using AWS services like Glue, Lambda, and Step Functions. Work with batch and real-time data processing using AWS Glue, Kinesis, Kafka, or Apache Spark. Optimize data pipelines for performance, scalability, and cost-effectiveness. Identify bottlenecks and optimize query performance on Redshift, Athena, and Glue. Strong knowledge of AWS services: EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, CloudWatch, etc. Experience with serverless architectures (AWS Lambda, API Gateway, Step Functions). Experience with AWS networking (VPC, Route 53, ELB, Security Groups, etc.). Experience with AWS CloudFormation for automating infrastructure. Proficiency in scripting languages such as Python or Bash. Experience with automation tools (AWS Systems Manager, AWS Lambda). Experience with containerization (Docker, Kubernetes, AWS ECS, EKS, Fargate). Experience with AWS CloudWatch, AWS X-Ray, the ELK Stack, or third-party monitoring tools. Experience with AWS database services (RDS, DynamoDB, Aurora, Redshift). Experience with storage solutions (S3, EBS, EFS, Glacier). Experience with AWS Direct Connect, Transit Gateway, and VPN solutions.
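As one concrete example of the serverless pattern listed above (an S3 event triggering Lambda), a minimal handler sketch follows. The queue URL and event wiring are placeholders, not details from the posting.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Placeholder queue URL; in practice this would come from an environment variable.
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-ingest-queue"


def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events; validates the object and enqueues it."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Lightweight validation: confirm the object exists and capture its size.
        head = s3.head_object(Bucket=bucket, Key=key)

        # Hand the object reference to downstream processing via SQS.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps(
                {"bucket": bucket, "key": key, "size": head["ContentLength"]}
            ),
        )

    return {"statusCode": 200}
```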
Posted 2 months ago
1 - 5 years
6 - 15 Lacs
Bengaluru
Work from Office
We're Hiring: AWS Data Engineer (6 months to 5 years of experience)
Location: Bangalore
Experience Required: 6 months to 5 years
Are you passionate about cloud technologies and working with big data? We are looking for a skilled AWS Data Engineer to join our team in Bangalore. If you have experience in designing, building, and optimizing data pipelines using AWS cloud technologies, we want to hear from you!
Key Responsibilities: Design, develop, and maintain scalable ETL/ELT data pipelines using AWS technologies. Work with PySpark and SQL to process large datasets efficiently. Manage AWS services like Redshift, EMR, Airflow, CloudWatch, and S3 for data processing and orchestration. Implement CI/CD pipelines using Azure DevOps. Monitor and optimize data workflows for performance, cost, and reliability. Collaborate with cross-functional teams to ensure smooth data integration.
Required Skills: Programming & Scripting: SQL, PySpark, Python. AWS Tools & Services: Apache Airflow, Redshift, EMR, CloudWatch, S3, Jupyter Notebooks. DevOps & CI/CD: Azure DevOps, Git, Unix commands.
Preferred Qualifications: Experience in performance tuning data pipelines and SQL queries. Knowledge of data lake and data warehouse architecture. Strong problem-solving skills. Understanding of data security, encryption, and cloud access control.
This is a fantastic opportunity to work on cutting-edge data technologies in a dynamic, growing team. If you're a tech enthusiast who thrives in a collaborative environment, apply now! Send your resume to career@ahanait.com, amruthavarshini.kn@ahanait.com, or sandhya.yashasvi@ahanait.com, or reach out to us on 9845222775 / 9845267997 / 7760957879.
Posted 2 months ago
1 - 6 years
3 - 6 Lacs
Mumbai Suburbs, Navi Mumbai, Mumbai (All Areas)
Work from Office
JOB DESCRIPTION - DATA ENGINEER
Department: Technology
Location: Mumbai - Lower Parel
Employment Type: Internship / Full time
Roles & Responsibilities: Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Key Skills / Requirements: We are looking for a candidate for a Data Engineer role who has a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. Advanced SQL knowledge, experience with relational and document databases and query authoring, and working familiarity with a variety of databases. Designing and implementing efficient and scalable data pipelines to extract, transform, and load (ETL) data from various sources. Building and maintaining robust data architectures, including databases and data warehouses, to ensure data availability and integrity. Strong analytic skills related to working with unstructured datasets. Knowledge of AWS cloud services: EC2, EMR, RDS, Redshift. Knowledge of Python is a must.
Benefits: Competitive salary packages and bonuses. Mediclaim plans for you and your dependents.
Posted 2 months ago
3 - 6 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Description: We are looking for a SQL Optimization & Data Engineering Specialist with expertise in SQL performance tuning, data migration, and AWS Redshift. The ideal candidate should have a deep understanding of optimizing complex SQL queries, working with large-scale data pipelines, and implementing Redshift-based data solutions.
Key Responsibilities: Analyse and optimize complex SQL queries to enhance performance. Identify and resolve slow-performing queries, indexing issues, and deadlocks. Improve database performance through tuning, partitioning, and caching techniques. Optimize complex SQL queries for performance improvements in AWS Redshift. Design and implement data migration strategies. Monitor and optimize Redshift query execution plans for efficiency.
Required Skills & Qualifications: Strong experience in SQL query optimization, performance tuning, and database indexing. Hands-on experience with AWS Redshift, including cluster management and query optimization. Experience in data migration from traditional relational databases to Redshift. Familiarity with AWS services like S3, Lambda, CloudWatch, and IAM for data workflows.
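To illustrate the query-tuning side of this role, a small sketch that pulls the execution plan for a candidate query from Redshift. Connection details and the query are placeholders; it uses psycopg2, one common way to connect to Redshift from Python, since Redshift speaks the PostgreSQL wire protocol.

```python
import psycopg2

# Placeholder connection details for an example Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="REDACTED",
)

CANDIDATE_QUERY = """
    SELECT c.region, SUM(o.amount) AS total_amount
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date >= '2024-01-01'
    GROUP BY c.region
"""

with conn, conn.cursor() as cur:
    # EXPLAIN surfaces redistribution steps (DS_BCAST/DS_DIST_*) that usually
    # point at missing or mismatched distribution and sort keys.
    cur.execute("EXPLAIN " + CANDIDATE_QUERY)
    for (plan_line,) in cur.fetchall():
        print(plan_line)

conn.close()
```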
Posted 2 months ago
3 - 7 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a skilled SQL Developer with expertise in AWS Redshift to join our data team. The ideal candidate will design, develop, and optimize complex SQL queries, manage ETL pipelines, and ensure the efficiency of data warehousing solutions in AWS. This role requires strong SQL skills, experience with AWS services, and the ability to work with large datasets.
Key Responsibilities: Develop, optimize, and maintain complex SQL queries for AWS Redshift. Design and manage ETL pipelines for data ingestion, transformation, and storage. Ensure data integrity, performance tuning, and query optimization in Redshift. Work with AWS services like S3, Lambda, Glue, and Athena for data processing. Collaborate with data engineers, analysts, and business teams to deliver scalable data solutions. Monitor and troubleshoot Redshift performance issues and implement best practices. Automate data workflows and optimize storage solutions in AWS. Develop and maintain Redshift schemas, tables, views, and stored procedures.
Required Skills & Qualifications: 3-5+ years of experience in SQL development and database management. Strong expertise in AWS Redshift, including performance tuning and optimization. Hands-on experience with ETL tools (AWS Glue, Apache Airflow, or similar). Proficiency in AWS services like S3, Lambda, and CloudWatch. Strong understanding of data modeling, warehousing, and schema design. Experience working with large datasets in a cloud-based environment. Knowledge of Python or shell scripting for automation is a plus. Familiarity with BI tools (Tableau, Power BI, or QuickSight) is a bonus. Strong analytical and problem-solving skills.
Preferred Qualifications: AWS certifications (AWS Certified Data Analytics, AWS Solutions Architect). Experience with Snowflake or BigQuery is a plus. Familiarity with DevOps practices and CI/CD for data pipelines.
Why Join Us? Work with cutting-edge cloud data technologies. Collaborative and dynamic work environment. Competitive salary, benefits, and career growth opportunities.
Posted 2 months ago
12 - 16 years
30 - 45 Lacs
Pune, Bengaluru
Hybrid
Role: AWS Cloud Architect
Location: Pune & Bengaluru
Full-time
Key Responsibilities
• Develop and maintain scalable and reliable data pipelines to ingest data from various APIs into the AWS ecosystem.
• Utilize AWS Redshift for data warehousing tasks, optimizing data retrieval and query performance.
• Configure and use AWS Glue for ETL processes, ensuring data is clean, well-structured, and ready for analysis.
• Utilize EC2 instances for custom applications and services that require compute capacity.
• Implement data lake and warehousing strategies to support analytics and business intelligence initiatives.
• Collaborate with cross-functional teams to understand data needs and deliver solutions that align with business goals.
• Ensure compliance with data governance and security policies.
Qualifications
• Solid experience in AWS services, especially S3, Redshift, Glue, and EC2.
• Proficiency in data ingestion and integration, particularly with APIs.
• A strong understanding of data warehousing, ETL processes, and cloud data storage.
• Experience with scripting languages such as Python for automation and data manipulation.
• Familiarity with infrastructure-as-code tools for managing AWS resources.
• Excellent problem-solving skills and ability to work in a dynamic environment.
• Strong communication skills for effective collaboration and documentation.
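A minimal sketch of the API-to-S3 ingestion step this role centres on. The endpoint, bucket, and key layout are placeholders and not taken from the posting.

```python
import datetime
import json

import boto3
import requests

# Placeholder source API and landing bucket.
API_URL = "https://api.example.com/v1/orders"
RAW_BUCKET = "example-raw-zone"

s3 = boto3.client("s3")


def ingest_once() -> str:
    """Pull one page of records from the API and land it as JSON in S3."""
    response = requests.get(API_URL, params={"page_size": 500}, timeout=30)
    response.raise_for_status()
    records = response.json()

    # Partition the raw zone by ingestion date so Glue crawlers/Athena can prune.
    today = datetime.date.today().isoformat()
    key = f"orders/ingest_date={today}/orders_{datetime.datetime.utcnow():%H%M%S}.json"

    s3.put_object(Bucket=RAW_BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return key


if __name__ == "__main__":
    print("landed:", ingest_once())
```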
Posted 2 months ago
4 - 8 years
6 - 16 Lacs
Bengaluru
Work from Office
4+ years of experience as a Data Engineer. Experience in AWS Cloud Services: EC2, S3, IAM. Experience with AWS Glue, DMS, RDBMS, and MPP databases like Snowflake and Redshift. Knowledge of data modelling and ETL processes. This role will be 5 days WFO; please apply only if you are open to working from the office. Only immediate joiners required.
Posted 2 months ago
5 - 9 years
15 - 30 Lacs
Noida
Remote
Overview
A Cloud Engineer is responsible for designing, building, and maintaining cloud-based infrastructure and processes to support data and application solutions. The role emphasizes implementing governance, best practices, and security measures while optimizing cloud costs. This individual will leverage Infrastructure as Code (IaC) and Continuous Integration/Continuous Deployment (CI/CD) pipelines to ensure scalability, efficiency, and security. They will work as part of the platform team in close collaboration with various groups, including data governance and data engineering.
Roles and Responsibilities: Collaborate closely with data teams to support the development and deployment of innovative and efficient data solutions. Respond to and fulfill platform requests from various NFL data teams, including internal business stakeholders, data analytics professionals, data engineers, and quality assurance teams.
Required Skill Sets: Hands-on experience with AWS, including familiarity with the following services: Analytics (Athena, Glue, Redshift); Application Integration (EventBridge, MWAA, SNS, SQS); Compute (EC2, Lambda); Containers (ECR, ECS); Database (DynamoDB, RDS); Developer Tools (CDK, CloudFormation, CodeBuild, CodeCommit, CodePipeline). Well-versed in the core principles of the AWS Well-Architected Framework, encompassing its five foundational pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. Strong problem-solving abilities and a passion for tackling complex challenges both collaboratively and independently. Proactive, detail-oriented self-starter with excellent organizational skills. Exceptional communication skills, with the ability to present findings effectively to both technical and non-technical audiences. Previous experience in AWS infrastructure operations, including monitoring, troubleshooting, and supporting cross-functional teams. Proficiency in optimizing AWS resources to enhance performance and scalability while reducing costs. Working knowledge of CI/CD pipelines and build/test/deploy automation tools. Experience in environments leveraging IaC tools (e.g., CDK and AWS CloudFormation). Proficient in Python, with intermediate-level expertise in developing scripts, automating workflows, and implementing solutions using the AWS Cloud Development Kit (CDK) and boto3. Extensive expertise in implementing security best practices, including the design and management of AWS IAM policies, encryption using AWS Key Management Service (KMS), and robust data protection mechanisms. Adept at adhering to the principle of least privilege to ensure secure and efficient access control. Active participation in defining cloud strategies and evaluating emerging technologies and AWS services. Proficient in working within agile environments, with a strong understanding of agile methodologies. Experienced in utilizing Jira to manage backlogs and sprints, as well as leveraging Jira Service Management to support ticketing and operational workflows.
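Given the emphasis on CDK-based IaC above, a minimal CDK (v2, Python) stack sketch follows. The construct names and resources are placeholders; the stack is deliberately tiny.

```python
import aws_cdk as cdk
from aws_cdk import Stack, aws_s3 as s3, aws_sqs as sqs
from constructs import Construct


class IngestPlatformStack(Stack):
    """Tiny example stack: a raw-zone bucket and an ingest queue."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, encrypted landing bucket with public access blocked.
        s3.Bucket(
            self,
            "RawZoneBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

        # Queue that downstream processing (e.g. Lambda or ECS tasks) would consume.
        sqs.Queue(
            self,
            "IngestQueue",
            visibility_timeout=cdk.Duration.seconds(300),
        )


app = cdk.App()
IngestPlatformStack(app, "IngestPlatformStack")
app.synth()
```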
Posted 2 months ago
4 - 8 years
20 - 32 Lacs
Bengaluru, Hyderabad, Mumbai (All Areas)
Hybrid
We are looking for a Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Skills: SQL; BigQuery (GCP) or Redshift (AWS) - cloud DWHs; Python; ETL - Airflow (optional); NoSQL - Bigtable (GCP, not mandatory); Storage - GCS or S3 (Compute - Google Compute Engine, Amazon EC2)
Posted 2 months ago
12 - 18 years
40 - 45 Lacs
Chennai, Bengaluru
Work from Office
Experience in Hadoop and GCP/AWS/Azure cloud ETL technologies such as Spark, PySpark/Scala, and Dataflow, and ETL tools like Informatica, DataStage, OWB, or Talend. Experience in S3, Cloud Storage, Athena, Glue, Sqoop, Flume, Hive, Kafka, and Pub/Sub.
Required Candidate profile: More than 10 years of experience in technical, solutioning, and analytical roles. 5+ years of experience in building and managing Data Lakes, Data Warehouses, and Data Integration.
Posted 2 months ago
8 - 11 years
15 - 25 Lacs
Gurgaon
Work from Office
Data Engineer with ETL development, data warehousing, data modeling, data integration, PySpark, Hadoop, and AWS as mandatory skills. Requires an immediate joiner or someone serving notice within 30 days.
Posted 2 months ago