
745 Amazon Redshift Jobs - Page 26

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

10 - 18 Lacs

Kanpur

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
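
For illustration, here is a minimal, hedged ETL sketch in Python of the kind of pipeline this posting describes. The file name, cluster endpoint, credentials, and the daily_sales table are placeholder assumptions, not details from the listing; psycopg2 is used because Redshift speaks the PostgreSQL wire protocol.

```python
# Minimal ETL sketch (illustrative only): extract a CSV, aggregate it, load into Redshift.
# 'sales.csv', the cluster endpoint, and the 'daily_sales' table are placeholders.
import pandas as pd
import psycopg2

# Extract: read raw order events from a local export.
raw = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Transform: aggregate to one row per day to keep the warehouse table compact.
daily = raw.groupby(raw["order_date"].dt.date)["amount"].sum().reset_index()
daily.columns = ["order_date", "total_amount"]

# Load: Redshift speaks the PostgreSQL wire protocol, so psycopg2 can insert directly.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
with conn, conn.cursor() as cur:
    cur.executemany(
        "INSERT INTO daily_sales (order_date, total_amount) VALUES (%s, %s)",
        list(daily.itertuples(index=False, name=None)),
    )
conn.close()
```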

Posted 2 months ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office

Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate seamless flow of information across.
Responsibilities: Experience in data architecture and engineering. Proven expertise with the Snowflake data platform. Strong understanding of ETL/ELT processes and data integration. Experience with data modeling and data warehousing concepts. Familiarity with performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Cloud & Data Architecture: AWS, Snowflake. ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions. Big Data & Analytics: Athena, Presto, Hadoop. Database & Storage: SQL, SnowSQL. Security & Compliance: IAM, KMS, Data Masking.
Preferred technical and professional experience: Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization). Data Transformation: DBT (Data Build Tool) for ELT pipeline management. Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance).
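
As a hedged illustration of the Snowflake side of this role, the sketch below runs a typical star-schema query through the official snowflake-connector-python package. The account, warehouse, database, and table names are assumptions for the example only.

```python
# Hedged sketch of querying Snowflake from Python with the official connector.
# Account, warehouse, database, and table names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # assumption: your Snowflake account identifier
    user="analyst",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # A typical dimensional query against a star schema: fact joined to a dimension.
    cur.execute(
        """
        SELECT d.region, SUM(f.revenue) AS total_revenue
        FROM fact_orders f
        JOIN dim_customer d ON f.customer_id = d.customer_id
        GROUP BY d.region
        ORDER BY total_revenue DESC
        """
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```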

Posted 2 months ago

Apply

2.0 - 7.0 years

15 - 19 Lacs

Bengaluru

Work from Office

Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Analyst
Location: Bangalore / Kolkata / Hyderabad
Must-have skills: Data engineering with Python or PySpark
Good-to-have skills: Gen AI
Job Summary: We are seeking a highly skilled and motivated Data Science Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines designing, building, and maintaining scalable data pipelines and infrastructure with client delivery management to execute cutting-edge projects in data science and data engineering.
Key Responsibilities:
Generative AI Expertise: Develop and fine-tune models for NLP, Computer Vision, and multimodal applications, leveraging GenAI frameworks. Design and implement evaluation strategies to optimize model performance (e.g., BLEU, ROUGE, FID). Architect deployment solutions, including API development and seamless integration with existing systems.
Data Science and Engineering: Design, build, and maintain robust, scalable, and efficient data pipelines (ETL/ELT). Work with structured and unstructured data across a wide variety of data sources. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Optimize data systems and architecture for performance, scalability, and reliability. Monitor data quality and support initiatives to ensure clean, accurate, and consistent data. Develop and maintain data models and metadata. Implement and maintain best practices in data governance, security, and compliance.
Required Qualifications:
Experience: 2+ years in data engineering/data science
Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
Technical Skills: Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Strong programming skills in Python, Scala, or Java. Languages and frameworks: Python, SQL, scikit-learn, TensorFlow or PyTorch. Data tools: Pandas, NumPy, Matplotlib, Seaborn. Orchestration: DBT, Apache Airflow. GenAI and LLM tooling: LangChain, LlamaIndex, Hugging Face Transformers, vector databases (e.g., FAISS, Pinecone). Good knowledge of MLOps best practices and processes. Experience with big data technologies such as Spark or Hive. Familiarity with cloud platforms like AWS, Azure, or GCP, especially services like S3, Redshift, BigQuery, or Azure Data Lake. Experience with orchestration tools like Airflow, Luigi, or similar. Solid understanding of data warehousing concepts and data modelling techniques. Good problem-solving skills and attention to detail.
Preferred Skills: Experience with modern data stack tools like dbt, Snowflake, or Databricks. Knowledge of CI/CD pipelines and version control (e.g., Git). Exposure to containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
Additional Information: The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients. This position is based at our Bengaluru (preferred), Kolkata, or Hyderabad office. About Our Company | Accenture
Qualification: Experience: 2+ years in data engineering and/or data science. Educational Qualification: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, Engineering, or a related field.
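
As a small, hedged illustration of the evaluation work mentioned above (BLEU/ROUGE/FID), the snippet below computes sentence-level BLEU with NLTK on made-up strings; it is an example pattern, not the team's actual evaluation pipeline.

```python
# Illustrative evaluation sketch: sentence-level BLEU with NLTK, one of the
# metrics named in the posting (BLEU/ROUGE/FID). The strings are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the model summarizes the report accurately".split()
candidate = "the model summarizes the report well".split()

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```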

Posted 2 months ago

Apply

12.0 - 15.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: Data Insights & Visualization Practitioner
Project Role Description: Create interactive interfaces that enable humans to understand, interpret, and communicate complex data and insights. Wrangle, analyze, and prepare data to ensure delivery of relevant, consistent, timely, and actionable insights. Leverage modern business intelligence, storytelling, and web-based visualization tools to create interactive dashboards, reports, and emerging VIS/BI artifacts. Use and customize (Gen)AI and AI-powered VIS/BI capabilities to enable a dialog with data.
Must-have skills: Data Analytics
Good-to-have skills: NA
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Data Insights & Visualization Practitioner, your typical day involves creating interactive interfaces that facilitate the understanding, interpretation, and communication of complex data and insights. You will engage in data wrangling, analysis, and preparation to ensure the delivery of relevant, consistent, timely, and actionable insights. Your role will also include leveraging modern business intelligence, storytelling, and web-based visualization tools to develop interactive dashboards, reports, and emerging visualization and business intelligence artifacts. Additionally, you will utilize and customize generative artificial intelligence and AI-powered visualization and business intelligence capabilities to foster a dialog with data, enhancing the overall data experience for users.
Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Expected to provide solutions to problems that apply across multiple teams. Facilitate training sessions to enhance team members' understanding of data visualization tools and techniques. Develop and maintain documentation for processes and best practices related to data insights and visualization.
Professional & Technical Skills: Proficiency in Data Analytics (must have). Strong experience with data visualization tools such as Tableau, Power BI, or similar platforms. Expertise in data wrangling and preparation techniques to ensure data quality. Ability to create compelling narratives through data storytelling. Familiarity with generative artificial intelligence and its application in data visualization.
Additional Information: The candidate should have a minimum of 12 years of experience in Data Analytics. This position is based at our Hyderabad office. A 15 years full-time education is required.
Qualification: 15 years full-time education

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication among team members and stakeholders.
Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Lead the application development process. Ensure effective communication among team members and stakeholders. Implement best practices for application design and configuration.
Professional & Technical Skills: Proficiency in AWS Glue (must have). Strong understanding of cloud computing principles. Experience with data integration and ETL processes. Knowledge of data warehousing concepts. Hands-on experience with AWS services such as S3, Lambda, and Redshift.
Additional Information: The candidate should have a minimum of 5 years of experience in AWS Glue. This position is based at our Gurugram office. A 15 years full-time education is required.
Qualification: 15 years full-time education
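
For context on what an AWS Glue job looks like in practice, here is a hedged, bare-bones PySpark job script. It runs inside Glue (the awsglue module is not available locally), and the catalog database, table, and S3 output path are placeholders.

```python
# Bare-bones AWS Glue job sketch (runs inside Glue, not locally).
# The catalog database/table and the S3 output path are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog as a DynamicFrame.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Simple transform: drop obviously bad rows using a Spark SQL filter expression.
cleaned = orders.toDF().filter("amount IS NOT NULL AND amount >= 0")

# Write the result back to S3 as Parquet for downstream Redshift/Athena use.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```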

Posted 2 months ago

Apply

7.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.
Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Lead the application development process. Ensure timely project delivery. Provide technical guidance and support to the team.
Professional & Technical Skills: Proficiency in AWS Glue (must have). Strong understanding of cloud computing principles. Experience with data integration and ETL processes. Hands-on experience in designing and implementing scalable applications. Knowledge of data warehousing concepts.
Additional Information: The candidate should have a minimum of 7.5 years of experience in AWS Glue. This position is based at our Gurugram office. A 15 years full-time education is required.
Qualification: 15 years full-time education

Posted 2 months ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the decision-making process. Your role will require a balance of technical expertise and leadership skills to drive project success and foster a collaborative team environment.
Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities. Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills: Proficiency in AWS Glue (must have). Strong understanding of data integration and ETL processes. Experience with cloud computing platforms and services. Familiarity with data warehousing concepts and best practices. Ability to troubleshoot and optimize data workflows.
Additional Information: The candidate should have a minimum of 7.5 years of experience in AWS Glue. This position is based in Pune. A 15 years full-time education is required.
Qualification: 15 years full-time education

Posted 2 months ago

Apply

4.0 - 8.0 years

10 - 12 Lacs

Mumbai, Bengaluru

Work from Office

Role Responsibilities: Design, develop, and maintain the organization's business intelligence and reporting dashboards. Design, develop, and maintain data models and feeds for report development. Prepare insights based on data and reporting solutions. Collaborate with business stakeholders to understand reporting and analysis needs and translate them into technical requirements. Ensure that data is accurate and easily accessible for reporting and analysis. Develop and maintain persona-based dashboards, management reports, and executive views to provide business insights. Apply Power BI experience to understand business needs and deliver technical solutions. Follow, learn, and share best practices for Power BI development. Ensure compliance with security and data privacy policies and regulations. Manage and prioritize workload and identify opportunities to streamline processes and improve efficiency. Keep up to date with industry trends and emerging technologies and evaluate their potential impact on business intelligence and reporting.
Qualifications & Skills: Bachelor's degree in Computer Science, Information Systems, or a related field; Master of Business Administration preferred. At least 4 years of experience in business intelligence and reporting, with a focus on data visualization, dashboards, and reporting. Demonstrated experience with Microsoft Power BI, with proficiency in complex DAX, Power Query, tabular data modeling, visualizations, RLS, composite modeling, Power BI Service, licensing, etc. Strong understanding of database systems, analytical processing, data modeling, and ETL/DWH frameworks. Experience in advanced SQL and database technologies such as Microsoft SQL Server, Azure SQL Server, Oracle, MySQL, and Amazon Redshift SQL. Experience with cloud-based data storage and processing technologies such as AWS or Azure Data Factory is a plus. Experience in end-to-end implementation projects as well as maintenance and support projects is a plus. Excellent communication skills, with the ability to collaborate effectively with cross-functional teams. Experienced across requirement analysis, solutioning, designing, modeling, implementation, testing, and end-user communication. Experience working in diverse environments (geographical, cultural, and business) is preferred. Ability to adapt to changing reporting requirements, priorities, and project timelines. Experience in the Banking or Financial Services industry is a plus. Experience in Agile methodologies is preferred. Experience in Microsoft Excel, SharePoint Online, Power Automate, Draw.io, Power Apps, administration, governance, REST API, PowerShell, JavaScript, DAX Studio, Tabular Editor, Python, or R programming is a plus. Certification in Microsoft Power BI or SQL is a plus.
Must Have: SQL (minimum 3-4 years of hands-on experience), DAX, Power BI, Power Query, M Query.
Location: Remote, Hyderabad, Ahmedabad, Pune, Chennai, Kolkata.
Mandatory Key Skills: DAX, Power Query, data modeling, ETL, data warehousing, Oracle, MySQL, Amazon Redshift, AWS, Azure Data Factory, Microsoft Excel, SharePoint Online, Power Automate, Power Apps, Business Intelligence*, business reporting*, data visualization*, SQL*, Power BI*

Posted 2 months ago

Apply

4.0 - 8.0 years

15 - 27 Lacs

Chennai

Work from Office

We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!
REQUIREMENTS: Total experience 4+ years. Excellent knowledge of and experience in big data engineering. Strong experience with AWS services, especially S3, Glue, Athena, and EMR. Hands-on programming experience in Python, Spark, SQL, and Talend. Proficiency in working with data warehouses such as Amazon Redshift and Snowflake. Experience handling structured and semi-structured data. Strong understanding of ETL/ELT processes and data transformation techniques. Proven experience in cross-functional collaboration with technical and business teams. Familiarity with data modeling, data warehousing, and building distributed systems. Expertise in Spanner for high-availability, scalable database solutions. Knowledge of data governance and security practices in cloud-based environments. Problem-solving mindset with the ability to tackle complex data engineering challenges. Strong communication and teamwork skills, with the ability to mentor and collaborate effectively. Experience with creating technical documentation and solution designs.
RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements. Mapping decisions with requirements and translating the same to developers. Identifying different solutions and narrowing down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken. Carrying out POCs to make sure that the suggested design/technologies meet the requirements.
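
As a hedged sketch of the AWS analytics stack mentioned above (S3, Glue, Athena), the snippet below submits an ad-hoc Athena query with boto3 and polls for the result. The database, table, and results bucket are placeholder names.

```python
# Hedged sketch: running an ad-hoc Athena query over S3 data with boto3.
# The database, table, and result bucket names are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS cnt FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (Athena runs asynchronously).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```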

Posted 2 months ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities: As a Tech Lead, the candidate should be able to work as an individual contributor as well as a people manager. Be able to work on data pipelines and databases. Be able to work on data-intensive applications or systems. Be able to lead the team and have the soft skills required for the same. Be able to review code and designs and mentor team members. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications: Graduate degree or equivalent experience. Experience working on Databricks. Well versed with Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc. Proven ability to work on a big data technology stack on cloud and on-prem. Proven ability to communicate effectively with the team. Proven ability to lead and mentor the team. Proven soft skills for people management.

Posted 2 months ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Noida

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities: Accountable for the data engineering lifecycle including research, proof of concepts, architecture, design, development, test, deployment, and maintenance. Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, and quality data solutions that transform data for meaningful analyses and analytics while ensuring operability. Layer instrumentation into the development process so that data pipelines can be monitored to detect internal problems before they result in user-visible outages or data quality issues. Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues. Embrace continuous learning of engineering practices to ensure industry best practices and technology adoption, including DevOps, Cloud, and Agile thinking. Tech debt reduction/tech transformation including open source adoption, cloud adoption, HCP assessment, and adoption. Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications: Undergraduate degree or equivalent experience. Experience with data analytics tools like Tableau, Power BI, or similar. Experience in optimizing data processing workflows for performance and cost-efficiency. Proficient in design and documentation of data exchanges across various channels including APIs, streams, and batch feeds. Proficient in source-to-target mapping and gap analysis, and applies data transformation rules based on understanding of business rules and data structures. Familiarity with healthcare regulations and data exchange standards (e.g., HL7, FHIR). Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks. Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA. Proven ability to develop and implement scripts to maintain and monitor performance tuning. Proven ability to design scalable job scheduler solutions and advise on appropriate tools/technologies to use. Proven ability to work across multiple domains to define and build data models. Proven ability to understand all the connected technology services and their impacts. Proven ability to assess designs and propose options to ensure the solution meets business needs in terms of security, scalability, reliability, and feasibility.

Posted 2 months ago

Apply

6.0 - 11.0 years

16 - 20 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. The Optum Technology Digital team is on a mission to disrupt the healthcare industry, transforming UHG into an industry-leading Consumer brand. We deliver hyper-personalized digital solutions that empower direct-to-consumer, digital-first experiences, educating, guiding, and empowering consumers to access the right care at the right time. Our mission is to revolutionize healthcare for patients and providers by delivering cutting-edge, personalized and conversational digital solutions. We’re Consumer Obsessed, ensuring they receive exceptional support throughout their healthcare journeys. As we drive this transformation, we're revolutionizing customer interactions with the healthcare system, leveraging AI, cloud computing, and other disruptive technologies to tackle complex challenges. Serving UnitedHealth Group's digital technology needs, the Consumer Engineering team impacts millions of lives through UnitedHealthcare & Optum. We are seeking a dynamic individual who embodies modern engineering culture - someone with deep engineering expertise within a digital product model, a passion for innovation, and a relentless drive to enhance the consumer experience. Our ideal candidate thrives in an agile, fast-paced rapid-prototyping environment, embraces DevOps and continuous integration/continuous deployment (CI/CD) practices, and champions the Voice of the Customer. If you are driven by the pursuit of excellence, eager to innovate, and excited to make a tangible impact within a team that embraces modern technologies and consumer-centric strategies, while prioritizing robust cyber-security protocols, we invite you to explore this exciting opportunity with us. Join our team and be at the forefront of shaping the future of healthcare, where your unique skills will not only be recognized but celebrated. 
Primary Responsibilities Design and implement data models to analyse business, system, and security events for real-time insights and threat detection Conduct exploratory data analysis (EDA) to understand patterns and relationships across large data sets, and develop hypotheses for new model development Develop dashboards and reports to present actionable insights to business and security teams Build and automate near real-time analytics workflows on AWS, leveraging services like Kinesis, Glue, Redshift, and QuickSight Collaborate with AI/ML engineers to develop and validate data features for model inputs Interpret and communicate complex data trends to stakeholders and provide recommendations for data-driven decision-making Ensure data quality and governance standards, collaborating with data engineering teams to build quality data pipelines Develop data science algorithms & generate actionable insights as per platform needs and work closely with cross capability teams throughout solution development lifecycle from design to implementation & monitoring Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B. Tech or Master’s degree or equivalent experience 12+ years of experience in data engineering roles in Data Warehouse 3+ years of experience as a Data Scientist with a focus on building models for analytics and insights in AWS environments Experience with AWS data and analytics services (e.g., Kinesis, Glue, Redshift, Athena, TimeStream) Hands-on experience with statistical analysis, anomaly detection and predictive modelling Proficiency with SQL, Python, and data visualization tools like QuickSight, Tableau, or Power BI Proficiency in data wrangling, cleansing, and feature engineering Preferred Qualifications Experience in security data analytics, focusing on threat detection and prevention Knowledge of AWS security tools and understanding of cloud data security principles Familiarity with deploying data workflows using CI/CD pipelines in AWS environments Background in working with real-time data streaming architectures and handling high-volume event-based data
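
To illustrate the anomaly-detection aspect of this role, here is a hedged scikit-learn sketch that flags outliers in synthetic event-rate data with an IsolationForest; it is an example pattern, not the team's actual model.

```python
# Hedged sketch of the anomaly-detection side of the role: IsolationForest
# over synthetic "events per minute / bytes transferred" features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 1_000], scale=[5, 100], size=(500, 2))   # typical traffic
spikes = rng.normal(loc=[300, 20_000], scale=[20, 500], size=(5, 2))  # injected anomalies
events = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
labels = model.predict(events)            # -1 = anomaly, 1 = normal
print("flagged anomalies:", int((labels == -1).sum()))
```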

Posted 2 months ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Gurugram

Work from Office

Job Summary: As a Data Engineer at Synechron, you will play a pivotal role in harnessing data to drive business value. Your expertise will be essential in developing and maintaining data pipelines, ensuring data integrity, and facilitating analytics that inform strategic decisions. This role contributes significantly to our business objectives by optimizing data processing and enabling insightful reporting across the organization.
Software Requirements: Required: AWS Redshift (3+ years of experience), Spark (3+ years), Python (3+ years), complex SQL (3+ years), shell scripting (2+ years), Docker (2+ years), Kubernetes (2+ years), Bitbucket (2+ years). Preferred: DBT, Dataiku, Kubernetes cluster management.
Overall Responsibilities: Develop and optimize data pipelines using big data technologies, ensuring seamless data flow and accessibility. Collaborate with cross-functional teams to translate business requirements into technical solutions. Ensure high data quality and integrity in analytics and reporting processes. Implement data architecture and modeling best practices to support strategic objectives. Troubleshoot and resolve data-related issues, maintaining a service-first mentality to enhance customer satisfaction.
Technical Skills (by category): Programming languages: essential: Python, SQL; preferred: shell scripting. Databases/data management: essential: AWS Redshift, Hive, Presto; preferred: DBT. Cloud technologies: essential: AWS; preferred: Kubernetes, Docker. Frameworks and libraries: essential: Spark; preferred: Dataiku. Development tools and methodologies: essential: Bitbucket, Airflow or Argo Workflows.
Experience Requirements: 6-7 years of experience in data engineering or related roles. Strong understanding of data and analytics concepts, with proven experience in big data technologies. Experience in the financial services industry preferred but not required. Alternative pathways: significant project experience in data architecture and analytics.
Day-to-Day Activities: Design and implement scalable data pipelines. Participate in regular team meetings to align on project goals and deliverables. Collaborate with stakeholders to refine data processes and analytics. Make informed decisions on data management strategies and technologies.
Qualifications: Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience). Certifications in AWS or relevant data engineering technologies preferred. Commitment to continuous professional development in data engineering and analytics.
Professional Competencies: Strong critical thinking and problem-solving capabilities, with a focus on innovation. Effective communication skills and stakeholder management. Ability to work collaboratively in a team-oriented environment. Adaptability and a willingness to learn new technologies and methodologies. Excellent time and priority management to meet deadlines and project goals.
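
As a hedged illustration of the orchestration tooling listed above (Airflow or Argo Workflows), here is a minimal Airflow DAG skeleton; the task bodies are stubs and the schedule and DAG name are assumptions.

```python
# Minimal Airflow DAG sketch (the posting lists Airflow or Argo Workflows).
# Task bodies are stubs; the schedule and dataset names are assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the landing bucket")

def transform():
    print("run Spark job / SQL transforms")

def load():
    print("COPY curated data into Redshift")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # use schedule_interval on Airflow versions before 2.4
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```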

Posted 2 months ago

Apply

3.0 - 7.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Job Summary: Synechron is seeking an experienced Senior Data Engineer with expertise in AWS, Apache Airflow, and DBT to design and implement scalable, reliable data pipelines. The role involves collaborating with data teams and business stakeholders to develop data solutions that enable actionable insights and support organizational decision-making. The ideal candidate will bring data engineering experience, demonstrating strong technical skills, strategic thinking, and the ability to work in a fast-paced, evolving environment.
Software Requirements: Required: Strong proficiency in AWS services including S3, Redshift, Lambda, and Glue, with proven hands-on experience. Expertise in Apache Airflow for workflow orchestration and pipeline management. Extensive experience with DBT for data transformation and modeling. Solid knowledge of SQL for data querying and manipulation. Preferred: Familiarity with Hadoop, Spark, or other big data technologies. Experience with NoSQL databases (e.g., DynamoDB, Cassandra). Knowledge of data governance and security best practices within cloud environments.
Overall Responsibilities: Lead the design, development, and maintenance of scalable and efficient data pipelines and workflows utilizing AWS, Airflow, and DBT. Collaborate with data scientists, analysts, and business teams to gather requirements and translate them into technical solutions. Optimize Extract, Transform, Load (ETL) processes to enhance data quality, integrity, and timeliness. Monitor pipeline performance, troubleshoot issues, and implement improvements to ensure operational excellence. Enforce data management, governance, and security protocols across all data flows. Mentor junior data engineers and promote best practices within the team. Stay current with emerging data technologies and industry trends, recommending innovations for the data ecosystem.
Technical Skills (by category): Programming languages: essential: SQL, Python (preferred for scripting and automation); preferred: Spark, Scala, Java (for big data integration). Databases/data management: extensive experience with data warehousing (Redshift, Snowflake, or similar) and relational databases (MySQL, PostgreSQL); familiarity with NoSQL databases such as DynamoDB or Cassandra is a plus. Cloud technologies: AWS cloud platform, leveraging services like S3, Lambda, Glue, Redshift, and IAM security features. Frameworks and libraries: Apache Airflow, dbt, and related data orchestration and transformation tools. Development tools and methodologies: Git, Jenkins, CI/CD pipelines, Agile/Scrum environment experience. Security protocols: knowledge of data encryption, access control, and compliance standards in cloud data engineering.
Experience Requirements: At least 8 years of professional experience in data engineering or related roles with a focus on cloud ecosystems and big data pipelines. Demonstrated experience designing and managing end-to-end data workflows in AWS environments. Proven success in collaborating with cross-functional teams and translating business requirements into technical solutions. Prior experience mentoring junior engineers and leading data projects is highly desirable.
Day-to-Day Activities: Develop, deploy, and monitor scalable data pipelines using AWS, Airflow, and DBT. Collaborate regularly with data scientists, analysts, and business stakeholders to refine data requirements and deliver impactful solutions. Troubleshoot production data pipeline issues to resolve data quality or performance bottlenecks. Conduct code reviews, optimize existing workflows, and implement automation to improve efficiency. Document data architecture, pipelines, and governance practices for knowledge sharing and compliance. Keep abreast of emerging data tools and industry best practices, proposing enhancements to existing systems.
Qualifications: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field; Master's degree preferred. Professional certifications such as AWS Certified Data Analytics - Specialty or related credentials are advantageous. Commitment to continuous professional development and staying current with industry trends.
Professional Competencies: Strong analytical, problem-solving, and critical thinking skills. Excellent communication abilities to effectively liaise with technical and business teams. Proven leadership in mentoring team members and managing project deliverables. Ability to work independently, prioritize tasks, and adapt to changing business needs. Innovative mindset focused on scalable, efficient, and sustainable data solutions.
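
One common way to combine the Airflow and DBT pieces of this role is to invoke the dbt CLI from a BashOperator; the sketch below assumes a dbt project checked out at a placeholder path and a 'prod' target, purely for illustration.

```python
# Hedged sketch: orchestrating dbt from Airflow with BashOperator tasks.
# The dbt project directory and target name are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # use schedule_interval on Airflow versions before 2.4
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/airflow/dbt/analytics_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/airflow/dbt/analytics_project && dbt test --target prod",
    )
    # Run the models first, then the tests that validate their output.
    dbt_run >> dbt_test
```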

Posted 2 months ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Overall Responsibilities: Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes. Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations. Software Requirements: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Familiarity with Hadoop, Kafka, and other distributed computing tools. Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Strong scripting skills in Linux. Category-wise Technical Skills: PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux. Experience: 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Proven track record of implementing data engineering best practices. Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform. Day-to-Day Activities: Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows using orchestration tools. Monitor pipeline performance and troubleshoot issues. Collaborate with team members to understand data requirements. 
Maintain documentation of data engineering processes and configurations. Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in PySpark and Cloudera technologies are a plus. Soft Skills: Strong analytical and problem-solving skills. Excellent verbal and written communication abilities. Ability to work independently and collaboratively in a team environment. Attention to detail and commitment to data quality.
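
To make the data-quality responsibilities above concrete, here is a hedged PySpark sketch of a validation gate that blocks a batch when basic rules fail; the paths and rules are placeholders.

```python
# Hedged sketch of a data-quality gate in a PySpark pipeline: count rule violations
# before publishing a batch. Paths and rules are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

orders = spark.read.parquet("/data/staging/orders")  # assumed staging location

checks = orders.agg(
    F.count("*").alias("row_count"),
    F.sum(F.when(F.col("order_id").isNull(), 1).otherwise(0)).alias("null_ids"),
    F.sum(F.when(F.col("amount") < 0, 1).otherwise(0)).alias("negative_amounts"),
).collect()[0]

# Fail the batch loudly rather than letting bad data reach the warehouse.
if checks["null_ids"] > 0 or checks["negative_amounts"] > 0:
    raise ValueError(f"Data quality check failed: {checks.asDict()}")

orders.write.mode("overwrite").parquet("/data/curated/orders")
```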

Posted 2 months ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

ABOUT THE ROLE Role Description: The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives and, visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate and communicate effectively with product teams Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast paced business needs across geographic regions Adhere to best practices for coding, testing, and designing reusable code/component Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Basic Qualifications and Experience: Master’s degree and 1 to 3 years of Computer Science, IT or related field experience OR Bachelor’s degree and 3 to 5 years of Computer Science, IT or related field experience OR Diploma and 7 to 9 years of Computer Science, IT or related field experience Functional Skills: Must-Have Skills Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing Experience with data warehousing platforms such as Amazon Redshift, or Snowflake. Strong knowledge of SQL and experience with relational (e.g., PostgreSQL, MySQL) databases. Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets. 
Experienced with software engineering best-practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GITLab etc.), automated unit testing, and Dev Ops Good-to-Have Skills: Experience with cloud platforms such as AWS particularly in data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena) Experience with Anaplan platform, including building, managing, and optimizing models and workflows including scalable data integrations Understanding of machine learning pipelines and frameworks for ML/AI models Professional Certifications: AWS Certified Data Engineer (preferred) Databricks Certified (preferred) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
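
As a hedged illustration of the Kafka piece of the stack listed above, the snippet below publishes a small ETL audit event with kafka-python; the broker address and topic are placeholders.

```python
# Hedged sketch: publishing ETL audit events to Kafka with kafka-python.
# The broker address and topic name are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"pipeline": "orders_daily", "status": "loaded", "rows": 10432}
producer.send("etl-audit-events", value=event)
producer.flush()  # block until the broker has acknowledged the message
```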

Posted 2 months ago

Apply

5.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We are seeking an experienced SQL Developer with expertise in SQL Server Analysis Services (SSAS) and AWS to join our growing team. The successful candidate will be responsible for designing, developing, and maintaining SQL Server-based OLAP cubes and SSAS models for business intelligence purposes. You will work with multiple data sources, ensuring data integration, optimization, and performance of the reporting models. This role offers an exciting opportunity to work in a hybrid work environment, collaborate with cross-functional teams, and

Posted 2 months ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Roles and Responsibilities: Design and develop efficient data models and data warehouses to support business intelligence and analytics needs. Design and develop dimensional models, star schemas, and OLAP cubes to enable complex analytics and reporting. Build and maintain fact and dimension tables, ensuring proper relationships and key structures. Build ETL processes to extract data from source systems, transform and cleanse data, and load it into the data warehouse. Implement SQL queries, stored procedures, triggers, and functions to support the data warehouse and data marts. Monitor ETL performance and fine-tune SQL code for optimal efficiency. Perform root cause analysis on data discrepancies and work to resolve issues. Collaborate with business analysts and end users to understand reporting needs and improve BI systems. Provide technical assistance to resolve all database issues related to performance, capacity, and access. Ensure data integrity and quality in database systems. Maintain standard design considerations while building the application DB. Maintain privacy by design and data governance. Implement normalization and build an optimized and robust database by design. Create required documentation such as data dictionaries, mapping documents, ER diagrams, etc. Keep current on data warehouse best practices, trends, and technologies.
Skills Required: Must have worked on designing databases and been involved in data modelling activities to set up a new application DB. At least 5 years of proven experience using MySQL RDBMS and data warehouses. Experience in ETL tools like DMS, Talend, or SSIS, and scripting in a Unix-like environment. Experience in building data validation reports and data profiling. Exposure to cloud database design and principles (AWS preferred). Experience in data governance and the ability to create domain-driven schemas. Experience in SQL and R/Python programming and optimization. Ability to handle large and complex databases. Good communication skills with the ability to collaborate with all stakeholders to gather requirements. Excellent problem-solving and analytical skills. Knowledge of machine learning algorithms is a bonus. Should be highly adaptive and a quick learner. Should be a self-starter who takes responsibility and ownership and is self-driven. Good team player with a commitment to high-quality work.
Qualifications: 5+ years of experience in database modelling and ETL tools. Expert in SQL, including complex queries, joins, subqueries, and stored procedures. Must have experience in MySQL and AWS Redshift. Broad awareness of database workloads and use cases, including performance, availability, and scalability. Demonstrated proficiency in database performance tuning, optimization, and troubleshooting. Familiarity with Amazon Web Services (EC2, EBS, S3, etc.). BS in Computer Science, Information Systems, or a related field; Master's preferred.
Contact: 9019730396 Email: himani@matrixhrservices.com
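
To illustrate the MySQL warehouse work described above, here is a hedged sketch of an upsert-style dimension load using mysql-connector-python; the connection details, table, and rows are placeholders.

```python
# Hedged sketch of loading a dimension table in a MySQL-based warehouse using an
# "insert or update" (upsert) pattern. Connection details and table are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="warehouse.example.internal", user="etl_user", password="***", database="dw"
)
cur = conn.cursor()

customers = [
    (101, "Asha Rao", "Bengaluru"),
    (102, "Vikram Shah", "Mumbai"),
]

# ON DUPLICATE KEY UPDATE keeps the dimension current when a customer record changes.
cur.executemany(
    """
    INSERT INTO dim_customer (customer_id, customer_name, city)
    VALUES (%s, %s, %s)
    ON DUPLICATE KEY UPDATE customer_name = VALUES(customer_name), city = VALUES(city)
    """,
    customers,
)
conn.commit()
cur.close()
conn.close()
```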

Posted 2 months ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Navi Mumbai

Work from Office

Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate seamless flow of information across.
Responsibilities: Experience in data architecture and engineering. Proven expertise with the Snowflake data platform. Strong understanding of ETL/ELT processes and data integration. Experience with data modeling and data warehousing concepts. Familiarity with performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Cloud & Data Architecture: AWS, Snowflake. ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions. Big Data & Analytics: Athena, Presto, Hadoop. Database & Storage: SQL, SnowSQL. Security & Compliance: IAM, KMS, Data Masking.
Preferred technical and professional experience: Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization). Data Transformation: DBT (Data Build Tool) for ELT pipeline management. Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance).

Posted 2 months ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Data Analyst (Visualisation Engineer) - Skills and Qualifications: SQL (mandatory). Proficiency in Tableau or Power BI for data visualization (mandatory). Strong programming skills in Python, including experience with data analysis libraries (mandatory). Knowledge of AWS services like S3, Redshift, Glue, and Lambda (nice to have). Familiarity with orchestration tools like Apache Airflow and AWS Step Functions (nice to have). Understanding of statistical concepts and methodologies. Excellent communication and presentation skills.
Job Summary: We are seeking a highly skilled Sr. Developer with 6 to 9 years of experience to join our dynamic team. The ideal candidate will have extensive experience either in Tableau (Tableau API, Tableau Cloud, database and SQL) or in Power BI (Power BI Report Builder, Power BI Service, DAX, MS Power BI, database and SQL). This role is hybrid with day shifts and no travel required. The Sr. Developer will play a crucial role in developing and maintaining our data visualization solutions, ensuring data accuracy and providing actionable insights to drive business decisions.
Responsibilities: Develop and maintain Tableau or Power BI dashboards and reports to provide actionable insights. Utilize the Tableau API or Power BI to integrate data from various sources and ensure seamless data flow. Design and optimize database schemas to support efficient data storage and retrieval. Write complex SQL queries to extract, manipulate, and analyze data. Collaborate with business stakeholders to understand their data needs and translate them into technical requirements. Ensure data accuracy and integrity by implementing data validation and quality checks. Provide technical support and troubleshooting for Tableau- or Power BI-related issues. Stay updated with the latest Tableau or Power BI features and best practices to enhance data visualization capabilities. Conduct performance tuning and optimization of Tableau or Power BI dashboards and reports. Train and mentor junior developers on Tableau or Power BI and SQL best practices. Work closely with the data engineering team to ensure data pipelines are robust and scalable. Participate in code reviews to maintain high-quality code standards. Document technical specifications and user guides for developed solutions.
Qualifications (Tableau): Must have extensive experience with the Tableau API and Tableau Cloud. Strong proficiency in database and SQL for data extraction and manipulation. Experience with the Tableau work model in a hybrid environment. Excellent problem-solving skills and attention to detail. Ability to collaborate effectively with cross-functional teams. Strong communication skills to convey technical concepts to non-technical stakeholders. Nice to have: experience in performance tuning and optimization of Tableau solutions.
Qualifications (Power BI): Strong expertise in Power BI Report Builder, Power BI Service, DAX, and MS Power BI. Proficiency in SQL and database management. Excellent problem-solving and analytical skills. Ability to work collaboratively in a hybrid work model. Strong communication skills to interact effectively with stakeholders. A keen eye for detail and a commitment to data accuracy. A proactive approach to learning and adopting new technologies.

Posted 2 months ago

Apply

6.0 - 11.0 years

4 - 8 Lacs

Kolkata

Work from Office

Must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage. (OR) SET 2: Must have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Should have demonstrable knowledge and expertise in working with time-series data. Working knowledge of delivering data engineering / data science projects in Industry 4.0 is an added advantage. Should have knowledge of Palantir. Strong problem-solving skills with an emphasis on sustainable and reusable development. Experience using statistical computer languages to manipulate data and draw insights from large data sets: Python/PySpark, Pandas, NumPy, seaborn/matplotlib; knowledge of Streamlit.io is a plus. Familiarity with Scala, GoLang, or Java would be an added advantage. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and NoSQL databases such as Hadoop, Cassandra, and MongoDB. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience building and optimizing big data pipelines, architectures, and data sets. Strong analytic skills related to working with unstructured datasets.
Primary Skills: Provide innovative solutions to the data engineering problems faced in the project and solve them with technically superior code and skills. Where possible, document the process of choosing technology or usage of integration patterns and help in creating a knowledge management artefact that can be used for other similar areas. Create and apply best practices in delivering the project with clean code. Work innovatively and with a sense of proactiveness in fulfilling the project needs.
Additional Information: Reporting to Director - Intelligent Insights and Data Strategy. Travel: Must be willing to be deployed at client locations anywhere in the world for long and short terms, and should be flexible to travel on shorter durations within India and abroad.
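
Since the posting stresses time-series data, here is a hedged pandas sketch that resamples synthetic one-minute sensor readings to hourly aggregates and counts gaps; the data and frequencies are illustrative only.

```python
# Hedged sketch of typical time-series wrangling: resample synthetic one-minute
# sensor readings to hourly means and count missing values per hour.
import numpy as np
import pandas as pd

idx = pd.date_range("2024-03-01", periods=720, freq="min")   # 12 hours of 1-minute data
readings = pd.DataFrame({"temperature": np.random.normal(70, 2, len(idx))}, index=idx)
readings.iloc[200:260] = np.nan                                # simulate a sensor outage

hourly_mean = readings["temperature"].resample("1h").mean()
hourly_gaps = readings["temperature"].resample("1h").apply(lambda s: int(s.isna().sum()))

print(hourly_mean.head())
print(hourly_gaps.head())
```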

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, Seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments. Key Responsibilities: Configure logging and metrics collection. Set up alerts and dashboards using Grafana, Prometheus, etc. Optimize system visibility for performance and security. Required Skills & Qualifications: Familiar with ELK stack, Datadog, New Relic, or Cloud-native monitoring tools. Strong troubleshooting and root cause analysis skills. Knowledge of distributed systems. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
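
As a hedged illustration of metrics collection for Prometheus/Grafana, the snippet below exposes two custom metrics with the prometheus_client library; the port and metric names are placeholders.

```python
# Hedged sketch of exposing custom metrics for Prometheus to scrape, which a Grafana
# dashboard or alert rule can then use. Port and metric names are placeholders.
import random
import time
from prometheus_client import start_http_server, Gauge, Counter

queue_depth = Gauge("ingest_queue_depth", "Messages waiting to be processed")
errors_total = Counter("ingest_errors_total", "Total ingestion errors")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        queue_depth.set(random.randint(0, 100))   # stand-in for a real measurement
        if random.random() < 0.05:
            errors_total.inc()
        time.sleep(5)
```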

Posted 2 months ago

Apply

5.0 - 8.0 years

9 - 19 Lacs

Gurugram, Bengaluru

Work from Office

Hi, greetings of the day! Hiring for an MNC for a Sr. Data Engineer profile.
Profile: Sr. Data Engineer. Experience: 4-10 years. Interview Mode: Virtual.
Mandatory Skills: PySpark, Python, AWS (Glue, EC2, Redshift, Lambda), Spark, Big Data, ETL, SQL, Data Warehousing. Good to have: Data structures and algorithms.
Responsibilities: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience as a Data Engineer or in a similar role. Experience with Python and big data technologies (Hadoop, Spark, Kafka, etc.). Experience with relational SQL and NoSQL databases. Strong analytic skills related to working with unstructured datasets. Strong project management and organizational skills. Experience with AWS cloud services: EC2, Lambda (Step Functions), RDS, Redshift. Ability to work in a team environment. Excellent written and verbal communication skills. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Interested candidates can share their resume at avanya@niftelresources.com or contact 9219975840.

Posted 2 months ago

Apply

5.0 - 8.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Technical Skills: Python, PySpark, SQL, Redshift, S3, CloudWatch, Lambda, AWS Glue, EMR, Step Functions, Databricks. Knowledge of a visualization tool will add value.
Experience: Should have worked in technical delivery of the above services, preferably in similar organizations, and should have good communication skills.
Certifications: AWS Data Engineer certification preferred.
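
As a hedged sketch tying together the Lambda and Glue services listed above, here is a minimal Lambda handler that starts a Glue job when a file lands in S3; the job name and event wiring are assumptions for the example.

```python
# Hedged sketch of a small AWS Lambda handler: when a file lands in S3, kick off
# a Glue job to process it. The Glue job name and trigger wiring are placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put events carry the bucket and object key that triggered the function.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    run = glue.start_job_run(
        JobName="curate_orders",                      # assumed Glue job name
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {"glue_job_run_id": run["JobRunId"]}
```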

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Interested candidates can share their updated CV at: heena.ruchwani@gspann.com
Join GSPANN Technologies as a Senior AWS Data Engineer and play a critical role in designing, building, and optimizing scalable data pipelines in the cloud. We're looking for an experienced engineer who can turn complex data into actionable insights using the AWS ecosystem.
Key Responsibilities: Design, develop, and maintain scalable data pipelines on AWS. Work with large datasets to perform ETL/ELT transformations using tools like AWS Glue, EMR, and Lambda. Optimize and monitor data workflows, ensuring reliability and performance. Collaborate with data analysts, architects, and other engineers to build data solutions that support business needs. Implement and manage data lakes, data warehouses, and streaming architectures. Ensure data quality, governance, and security standards are met across platforms. Participate in code reviews, documentation, and mentoring of junior data engineers.
Required Skills & Qualifications: 5+ years of experience in data engineering, with strong hands-on work in the AWS cloud ecosystem. Proficiency in Python, PySpark, and SQL. Strong experience with AWS services: AWS Glue, Lambda, EMR, S3, Athena, Redshift, Kinesis, etc. Expertise in data pipeline development and workflow orchestration (e.g., Airflow, Step Functions). Solid understanding of data warehousing and data lake architecture. Experience with CI/CD, version control (GitHub), and DevOps practices for data environments. Familiarity with Snowflake, Databricks, or Looker is a plus. Excellent communication and problem-solving skills.
Interested candidates can share their updated CV at: heena.ruchwani@gspann.com

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
