
1759 Redshift Jobs - Page 40

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


ABOUT ABC:
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably by offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a portfolio company of Thoma Bravo, a private equity firm focused on investing in software and technology companies (thomabravo.com).

WHAT YOU'LL DO:
- Collaborate with software development teams to enable reliable and stable software services
- Develop software solutions that enhance the reliability and performance of services
- Optimize software release and deployment of ABC systems and cloud infrastructure in AWS
- Be an advocate for availability, reliability and scalability practices
- Define and adhere to Service Level Objectives and standard processes
- Enable product engineering teams through support of automated deployment pipelines
- Collaborate with product development as an advocate for scalable architectural approaches
- Advocate for infrastructure and application security practices in the development process
- Respond to production incidents in a balanced rotation with other SREs and Senior Engineers
- Lead a culture of learning and continuous improvement through incident postmortems and retrospectives

WHAT YOU'LL NEED:
- 5+ years of demonstrable experience as a DevOps Engineer across our technology stack
- Proficiency in one programming language: Go, PHP, NodeJS, Python or Java
- Experience with infrastructure running 100% in AWS
- Experience with service-oriented architecture deployed on ECS Fargate & Lambda
- Database exposure to MySQL, Postgres, MongoDB, DynamoDB, Redshift
- Familiarity with infrastructure automation using Terraform
- Familiarity with observability & monitoring using Honeycomb, NewRelic, CloudWatch, Grafana
- Exposure to CI/CD pipelines with GitHub, CircleCI and Jenkins
- Willingness to be part of a rotating on-call schedule
- Openness to irregular work hours to support teams in different time zones

WHAT'S IN IT FOR YOU:
- Purpose-led company with a values-focused culture – Best Life, One Team, Growth Mindset
- Time Off – competitive PTO plans with 15 days earned accrued leave, 12 days sick leave, and 12 days casual leave per year
- 11 holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life Insurance and Personal Accident Insurance
- Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement
- Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16
- Support for working women with financial aid towards crèche facilities, ensuring a safe and nurturing environment for their little ones while they focus on their careers
We're committed to diversity and passion, and encourage you to apply even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com.
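
Illustrative note (not part of the posting): the SLO and observability work described above often starts with something as small as publishing a service-availability metric. A minimal Python/boto3 sketch follows; the cluster name, service name, region, and metric namespace are hypothetical placeholders, not details from the listing.

# Minimal sketch: report ECS Fargate service availability as a custom CloudWatch metric.
# Cluster/service names, region, and the namespace are illustrative placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

CLUSTER = "demo-cluster"        # placeholder
SERVICES = ["api-service"]      # placeholder

resp = ecs.describe_services(cluster=CLUSTER, services=SERVICES)
for svc in resp["services"]:
    desired = svc["desiredCount"]
    running = svc["runningCount"]
    availability = 100.0 * running / desired if desired else 0.0
    cloudwatch.put_metric_data(
        Namespace="Demo/SLO",   # placeholder namespace
        MetricData=[{
            "MetricName": "ServiceAvailability",
            "Dimensions": [{"Name": "ServiceName", "Value": svc["serviceName"]}],
            "Value": availability,
            "Unit": "Percent",
        }],
    )
    print(f'{svc["serviceName"]}: {running}/{desired} tasks running ({availability:.1f}%)')

A CloudWatch alarm on such a metric is one simple way to make a Service Level Objective visible to the whole team.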

Posted 2 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary: A career within Application and Emerging Technology services will provide you with a unique opportunity to help our clients identify and prioritise emerging technologies that can help solve their business problems. We help clients design approaches to integrate new technologies, skills, and processes so they can get the most out of their technology investment and drive business results and innovation.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Responsibilities:
- 6-8 years of hands-on development experience, leading and performing development in one or more programming languages such as Python, PySpark, etc.
- 4-6 years of hands-on experience in the development and deployment of cloud-native solutions leveraging AWS services: compute (EC2, Lambda), storage (S3), database (RDS, Aurora, Postgres, DynamoDB), orchestration (Apache Airflow, Step Functions, SNS), ETL/analytics (Glue, EMR, Athena, Redshift), infrastructure (CloudFormation, CodePipeline), data migration (AWS DataSync, AWS DMS), API Gateway, IAM, etc.
- Expertise in handling large data sets and data models: design, data model creation, and development of data pipelines for data ingestion, migration and transformation
- Strong in SQL Server and stored procedures
- Knowledge of APIs, SSO, and streaming technology is nice to have

Mandatory Skill Sets: AWS, PySpark, Spark
Preferred Skill Sets: AWS, PySpark, Spark
Years of Experience Required: 6 - 8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Fields of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration
Degrees/Fields of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Generative AI, JavaScript, Node.js, Python (Programming Language)
Optional Skills:
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
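
Illustrative note (not part of the posting): a minimal PySpark sketch of the kind of batch ingestion/transformation step this role describes, writing partitioned Parquet that Athena or Redshift Spectrum could query. The bucket paths and column names are hypothetical placeholders.

# Minimal PySpark batch-aggregation sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

raw = spark.read.parquet("s3://example-raw-bucket/orders/")          # placeholder path

daily = (
    raw.filter(F.col("status") == "COMPLETED")                       # placeholder column
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("customer_id").alias("unique_customers"))
)

# Write partitioned Parquet that an Athena or Redshift Spectrum table could sit on top of.
daily.write.mode("overwrite").partitionBy("order_date") \
     .parquet("s3://example-curated-bucket/orders_daily/")           # placeholder path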

Posted 2 weeks ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

On-site


DevSecOps Engineer – Deputy Manager

Role Overview:
As a DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Work you'll do:
- Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient and secure pipelines with low operating costs, meeting platform/technology KPIs.
- Technical Leadership and Advocacy: Serve as the technical advocate for modern DevSecOps practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices. Be responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.) and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities.
- Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments while leading the implementation of deployment techniques like Blue-Green and Canary to minimize downtime and enable A/B testing. Always be hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., leading triage and troubleshooting of production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies.
- Customer-Centric Engineering: Develop lean, yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices.
- Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a lean-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.
- Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation.
- Advanced Technical Proficiency: Possess intermediate knowledge of modern software engineering practices and principles, including Agile methodologies, DevSecOps, and Continuous Integration/Continuous Deployment. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate an intermediate-level understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning.
- Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility.
- Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Learn to create a coherent narrative that aligns technical solutions with business objectives.
- Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions.

The team:
US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes by leveraging a progressive and responsive talent structure. As Deloitte's primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte's success. It is the engine that drives Deloitte, serving many of the world's largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.

Key Qualifications:
- A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience of 8-10 years is required.
- Strong software engineering foundation with a deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
- 5+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
- 5+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
- 5+ years of hands-on experience with security tools automation (SAST/DAST: SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP.
- 5+ years of hands-on experience using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
- 2+ years of hands-on experience with cloud-native services like Data Lakes, CDN, API Gateways, Managed PaaS, Security, etc. on multiple cloud providers like AWS, Azure and GCP is preferred.
- Strong understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
- General understanding of cloud providers' security practices and of database technologies and maintenance (e.g. RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
- General knowledge of networking, firewalls, and load balancers.
- Strong preference will be given to candidates with AI/ML and GenAI experience.
- Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

Work Location: Hyderabad

How you'll grow:
At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits:
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture:
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship:
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

Recruiting tips:
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture:
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose:
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development:
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302704
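
Illustrative note (not part of the posting): one concrete step behind the Blue-Green/Canary deployment techniques mentioned above is weighting traffic between two ALB target groups. A minimal Python/boto3 sketch follows; the listener and target-group ARNs, region, and the 10% starting weight are hypothetical placeholders, and a real rollout would gate each weight increase on health and SLO checks.

# Minimal sketch: shift a fraction of traffic to the new ("green") target group
# on an Application Load Balancer listener. ARNs and weights are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:REGION:ACCT:listener/app/demo/LISTENER_ID"   # placeholder
BLUE_TG_ARN = "arn:aws:elasticloadbalancing:REGION:ACCT:targetgroup/blue/TG_ID"           # placeholder
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:REGION:ACCT:targetgroup/green/TG_ID"         # placeholder

def shift_traffic(green_weight: int) -> None:
    """Send green_weight percent of traffic to the green target group."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE_TG_ARN, "Weight": 100 - green_weight},
                    {"TargetGroupArn": GREEN_TG_ARN, "Weight": green_weight},
                ]
            },
        }],
    )

shift_traffic(10)   # start with a 10% canary; promote gradually after checks pass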

Posted 2 weeks ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

On-site


DevSecOps Engineer – CL 4

Role Overview:
As a DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Key Responsibilities:
- Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient and secure pipelines with low operating costs, meeting platform/technology KPIs.
- Technical Leadership and Advocacy: Serve as the technical advocate for modern DevSecOps practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices. Be responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.) and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities.
- Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments while leading the implementation of deployment techniques like Blue-Green and Canary to minimize downtime and enable A/B testing. Always be hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., leading triage and troubleshooting of production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies.
- Customer-Centric Engineering: Develop lean, yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices.
- Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a lean-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.
- Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation.
- Advanced Technical Proficiency: Possess intermediate knowledge of modern software engineering practices and principles, including Agile methodologies, DevSecOps, and Continuous Integration/Continuous Deployment. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate an intermediate-level understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning.
- Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility.
- Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Learn to create a coherent narrative that aligns technical solutions with business objectives.
- Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions.

The team:
US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes by leveraging a progressive and responsive talent structure. As Deloitte's primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte's success. It is the engine that drives Deloitte, serving many of the world's largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.

Key Qualifications:
- A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor.
- Strong software engineering foundation with a deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
- 5+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
- 5+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
- 5+ years of hands-on experience with security tools automation (SAST/DAST: SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP.
- 5+ years of hands-on experience using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
- 2+ years of hands-on experience with cloud-native services like Data Lakes, CDN, API Gateways, Managed PaaS, Security, etc. on multiple cloud providers like AWS, Azure and GCP is preferred.
- Strong understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
- General understanding of cloud providers' security practices and of database technologies and maintenance (e.g. RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
- General knowledge of networking, firewalls, and load balancers.
- Strong preference will be given to candidates with AI/ML and GenAI experience.
- Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

How You will Grow:
At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.

Recruiting tips:
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits:
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture:
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose:
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development:
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302719
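
Illustrative note (not part of the posting): the "shift security left" and SAST-automation theme above often reduces to a small pipeline gate. Below is a minimal Python sketch that fails a CI job when a SonarQube quality gate is not passing; the server URL, project key, and SONAR_TOKEN environment variable are hypothetical placeholders, and the exact endpoint behaviour should be checked against your SonarQube version.

# Minimal CI gate sketch: stop the pipeline if the SonarQube quality gate is failing.
import os
import sys
import requests

SONAR_URL = "https://sonarqube.example.com"     # placeholder
PROJECT_KEY = "demo-service"                    # placeholder
TOKEN = os.environ.get("SONAR_TOKEN", "")       # placeholder env var

resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(TOKEN, ""),          # SonarQube tokens are passed as the basic-auth username
    timeout=30,
)
resp.raise_for_status()
status = resp.json()["projectStatus"]["status"]
print(f"Quality gate for {PROJECT_KEY}: {status}")
if status != "OK":
    sys.exit(1)   # non-zero exit fails the CI job so the pipeline stops here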

Posted 2 weeks ago

Apply

3.0 years

4 - 6 Lacs

Hyderābād

On-site


- 3+ years of data engineering experience
- Bachelor’s degree in Computer Science, Engineering, or a related technical discipline

As a Data Engineer on the Data and AI team, you will design and implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance. You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks. If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
- Design and implement ETL/ELT frameworks that handle large-scale data operations, while building reusable components for data ingestion, transformation, and orchestration while ensuring data quality and reliability.
- Establish and maintain robust data governance standards by implementing comprehensive security controls, access management frameworks, and privacy-compliant architectures that safeguard sensitive information.
- Drive the implementation of data solutions, both real-time and batch, optimizing them for both analytical workloads and AI/ML applications.
- Lead technical design reviews and provide mentorship on data engineering best practices, identifying opportunities for architectural improvements and guiding the implementation of enhanced solutions.
- Build data quality frameworks with robust monitoring systems and validation processes to ensure data accuracy and reliability throughout the data lifecycle.
- Drive continuous improvement initiatives by evaluating and implementing new technologies and methodologies that enhance data infrastructure capabilities and operational efficiency.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including developing and optimizing ETL/ELT processes, implementing data governance controls, and reviewing code for data processing systems. You'll work closely with software engineers, scientists, and product managers, participating in technical design discussions and sharing your expertise in data architecture and engineering best practices. Your responsibilities extend to communicating with non-technical stakeholders, explaining data-related projects and their business impact. You'll also mentor junior engineers and contribute to maintaining comprehensive technical documentation. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved complex technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you're passionate about this role and want to make an impact on a global scale, please apply!

About the team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

- Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
- Proficiency in designing and implementing logical data models that drive physical designs
- Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

1.0 years

4 - 6 Lacs

Hyderābād

On-site


- 1+ years of data engineering experience
- Bachelor’s degree in Computer Science, Engineering, or a related technical discipline

As a Data Engineer on the Data and AI team, you will implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance. You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks. If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
• Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative ideas to market.
• Contribute to the implementation and maintenance of data architecture, infrastructure and storage solutions under the guidance of senior engineers, focusing on data quality and reliability.
• Assist in building data pipelines, pipeline orchestration, data governance frameworks, data quality testing and pipeline management using continuous integration and deployment, while learning best practices from experienced team members.
• Participate in technical discussions and contribute to database design decisions, learning about scalability and reliability considerations while implementing optimized code according to team standards.
• Execute assigned technical tasks within larger projects, writing well-tested ETL/ELT pipelines, providing thorough documentation, and following established data engineering practices.
• Work in an agile environment to deliver high-quality data pipelines supporting real-time and end-of-day data requirements.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including troubleshooting, data quality deep dives, developing optimized ETL/ELT processes, implementing data governance controls, and unit testing code and getting it reviewed. You'll continually improve ongoing processes, automating or simplifying data engineering efforts. You'll work closely with senior engineers, data scientists, and product managers, participating in technical design discussions and learning the business context and technologies in data architecture. Your responsibilities extend to communicating with non-technical stakeholders, explaining root causes and their business impact. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to continually incorporate new learnings into your work. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

About the team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

- Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
- Proficiency in designing and implementing logical data models that drive physical designs
- Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
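
Illustrative note (not part of the posting): pipeline orchestration as mentioned above is commonly expressed as an Airflow DAG. A minimal sketch follows; the DAG id, schedule, and task bodies are hypothetical placeholders, and the schedule argument assumes Airflow 2.4 or newer (older releases use schedule_interval).

# Minimal Airflow DAG sketch: a three-step daily ETL with placeholder task bodies.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")     # placeholder step

def transform():
    print("clean and reshape the extracted data")     # placeholder step

def load():
    print("load curated data into the warehouse")     # placeholder step

with DAG(
    dag_id="example_daily_etl",          # placeholder
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3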

Posted 2 weeks ago

Apply

5.0 years

6 - 8 Lacs

Bengaluru

On-site


Location: Bengaluru, Karnataka
Experience: 5+ Years
Education: Bachelor’s degree in computer science, Engineering, or related field

About the Role:
As a Technical Sales Support Engineer for the Global Technical Sales Environment, you will be responsible for the management and optimization of cloud resources that support technical sales engagements. This role involves provisioning, maintaining, and enhancing the infrastructure required for POCs, workshops, and product demonstrations for the technical sales community. Beyond infrastructure management, you will play a critical role in automation, driving efficient deployments, optimizing cloud operations, and developing tools to enhance productivity. Security will be a key focus, requiring proactive identification and mitigation of vulnerabilities to ensure compliance with enterprise security standards. Expertise in automation, scripting, and infrastructure development will be essential to deliver scalable, secure, and high-performance solutions, supporting customer, prospect, and partner engagements.

Key Responsibilities:
- Cloud Infrastructure Management & Support of TechSales activities: Install, upgrade, configure, and optimize the Informatica platform, both on-premises and Cloud platform runtime environments. Manage the configuration, security, and networking aspects of Informatica Cloud demo platforms and resources. Coordinate with Cloud Trust Operations to ensure smooth implementation of Informatica Cloud Platform changes. Monitor cloud environments across AWS, Azure, GCP, and Oracle Cloud to detect potential issues and mitigate risks proactively. Analyse cloud resource utilization and implement cost-optimization strategies while ensuring performance and reliability.
- Security & Compliance: Implement security best practices, including threat monitoring, server log audits, and compliance measures. Work towards identifying and mitigating vulnerabilities to ensure a robust security posture.
- Automation & DevOps Implementation: Automate deployments and streamline operations using Bash/Python, Ansible, and DevOps methodologies. Install, manage, and maintain Docker containers to support scalable environments. Collaborate with internal teams to drive automation initiatives that enhance efficiency and reduce manual effort.
- Technical Expertise & Troubleshooting: Apply strong troubleshooting skills to diagnose and resolve complex issues on Informatica Cloud demo environments, Docker containers, and Hyperscalers (AWS, Azure, GCP, OCI). Maintain high availability and performance of the Informatica platform and runtime agent. Manage user roles, access controls, and permissions within the Informatica Cloud demo platform.
- Continuous Learning & Collaboration: Stay updated on emerging cloud technologies and automation trends through ongoing professional development. Work closely with Informatica support to drive platform improvements and resolve technical challenges.
- Scheduling & On-Call Support: Provide 24x5 support as per business requirements, ensuring seamless operations.

Role Essentials:
- Automation & DevOps Expertise: Proficiency in Bash/Python scripting. Strong understanding of DevOps principles and CI/CD pipelines. Hands-on experience in automation tools like Ansible.
- Cloud & Infrastructure Management: Experience in administering Cloud data management platforms and related SaaS. Proficiency in Unix/Linux/Windows environments. Expertise in cloud computing platforms (AWS, Azure, GCP, OCI). Hands-on experience with Docker, Containers, and Kubernetes.
- Database & Storage Management: Experience with relational databases (MySQL, Oracle, Snowflake). Strong SQL skills for database administration and optimization.
- Monitoring & Observability: Familiarity with monitoring tools such as Grafana.
- Education & Experience: BE or equivalent educational background, with a combination of relevant education and experience being considered. Minimum 5+ years of relevant professional experience.

This role offers an opportunity to work in a dynamic, cloud-driven, and automation-focused environment, contributing to the seamless execution of technical sales initiatives.

Preferred Skills:
- Experience in administering Informatica Cloud (IDMC) and related products.
- Experience with storage solutions like Snowflake, Databricks, Redshift, Azure Synapse and improving database performance.
- Hands-on experience with Informatica Platform (On-Premises).
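
Illustrative note (not part of the posting): the cloud resource utilization and cost-optimization analysis mentioned above can start with a small report like the one sketched below in Python/boto3. The instance IDs, region, look-back window, and 10% threshold are hypothetical placeholders.

# Minimal utilization-report sketch: flag EC2 instances with low average CPU over a week.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
INSTANCE_IDS = ["i-0123456789abcdef0"]          # placeholder

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

for instance_id in INSTANCE_IDS:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
    flag = "candidate for right-sizing or shutdown" if avg < 10 else "ok"   # placeholder threshold
    print(f"{instance_id}: avg CPU {avg:.1f}% over 7 days -> {flag}")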

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru

On-site


Imagine what you could do here. At Apple, we believe new insights have a way of becoming excellent products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The people here at Apple don't just build products - they build the kind of wonder that's revolutionised entire industries. It's the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. As a Software Engineer, you are an integral part of a small data-centric team driving large-scale data infrastructure and processes development, implementation, and improvement. Our organization thrives on collaborative partnerships. Join and play a key role in developing and driving the adoption of Data Mesh and data-centric micro-services. Apple's Manufacturing Systems and Infrastructure (MSI) team is responsible for capturing, consolidating and tracking all manufacturing data for Apple's products and modules worldwide. Our tools enable teams to confidently use data to shape the next generation of product manufacturing at Apple. We seek a practitioner with experience building large-scale data platforms, analytic tools, and solutions. If you are passionate about making data easily accessible, trusted, and available across the entire business at scale, we'd love to hear from you.

Description
As a Software Engineer, you will work closely with cross-functional teams to understand business requirements, design scalable solutions, and ensure the integrity and availability of our data. The ideal candidate will have a deep understanding of cloud technologies, UI technologies, software engineering best practices, and a proven track record of successfully delivering complex projects.
- Lead the design and implementation of cloud-based data architectures.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements.
- Stay current with industry trends and emerging technologies in cloud engineering.

Minimum Qualifications
- B.Tech. degree in computer science or equivalent field
- Hands-on programming experience
- Experience with the React frontend framework, deep understanding of React.js, and Redux
- Proficient in programming languages such as Python, Java, Scala, GoLang, JavaScript
- Proficiency in cloud services such as AWS, Azure, or Google Cloud
- Expertise in building UI and data integration services
- Experience with streaming UI technologies
- Experience building data streaming solutions using Apache Spark / Apache Storm / Flink / Flume

Preferred Qualifications
- Knowledge of data warehouse solutions (Redshift, BigQuery, Snowflake, Druid)
- Certification in cloud platforms
- Knowledge of machine learning and data science concepts
- Contributions to the open source community
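
Illustrative note (not part of the posting): a minimal PySpark Structured Streaming sketch of the kind of data streaming solution the qualifications mention. The Kafka broker, topic, and S3 paths are hypothetical placeholders, and the job assumes the Spark Kafka connector package is available on the cluster.

# Minimal Structured Streaming sketch: windowed counts from a Kafka topic to Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
         .option("subscribe", "manufacturing-events")        # placeholder topic
         .load()
         .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

counts = (
    events.withWatermark("timestamp", "10 minutes")
          .groupBy(F.window("timestamp", "5 minutes"))
          .count()
)

query = (
    counts.writeStream.outputMode("append")
          .format("parquet")
          .option("path", "s3://example-bucket/event_counts/")        # placeholder
          .option("checkpointLocation", "s3://example-bucket/chk/")   # placeholder
          .start()
)
query.awaitTermination()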

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Karnataka, India

On-site


Who You'll Work With
You will be part of the Digital Design & Merchandising, Product Creation, Planning, and Manufacturing Technology team at Converse. You will take direction from and work primarily with the Demand and Supply team, supporting the business planning space. You'll work with a talented team of engineers, data architects, and business stakeholders to design and implement scalable data integration solutions on cloud-based platforms to support our planning org. The successful candidate will be responsible for leading the integration of planning systems, processes, and data across the organization.

Who We Are Looking For
We're looking for a seasoned Cloud Integration Lead with expertise in Databricks, Apache Spark, and cloud-based data integration. You'll have a strong technical background, excellent collaboration skills, and a passion for delivering high-quality solutions.

The ideal candidate will have:
- 5+ years of experience with Databricks, Apache Spark, and cloud-based data integration.
- Strong technical expertise with cloud-based platforms, including AWS and/or Azure.
- Strong programming skills in languages like SQL, Python, Java, or Scala.
- 3+ years' experience with cloud-based data infrastructure and integration leveraging tools like S3, Airflow, EC2, AWS Glue, DynamoDB & Lambdas, Athena, AWS CodeDeploy, Azure Data Factory, or Google Cloud Dataflow.
- Experience with Jenkins and other CI/CD tools like GitLab CI/CD, CircleCI, etc.
- Experience with containerization using Docker and Kubernetes.
- Experience with infrastructure as code using tools like Terraform or CloudFormation.
- Experience with Agile development methodologies and version control systems like Git.
- Experience with IT service management tools like ServiceNow, JIRA, etc.
- Data warehousing solutions, such as Amazon Redshift, Azure Synapse Analytics, or Google BigQuery, will be a plus but not mandatory.
- Data science and machine learning concepts, including TensorFlow, PyTorch, or scikit-learn, will be a plus but not mandatory.
- Strong technical background in computer science, software engineering, or a related field.
- Excellent collaboration, communication, and interpersonal skills.
- Experience with data governance, data quality, and data security principles.
- Ability to lead and mentor junior team members.
- AWS Certified Solutions Architect, AWS Certified Developer Associate, or Azure Certified Solutions Architect certification.

What You'll Work On
- Design and implement scalable data integration solutions using Databricks, Apache Spark, and cloud-based platforms.
- Develop and implement cloud-based data pipelines using Databricks, NiFi, AWS Glue, Azure Data Factory, or Google Cloud Dataflow.
- Collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements.
- Develop and maintain technical standards, best practices, and documentation.
- Integrate various data sources, including on-premises and cloud-based systems, applications, and databases.
- Ensure data quality, integrity, and security throughout the integration process.
- Collaborate with data engineering, data science, and business stakeholders to understand requirements and deliver solutions.
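
Illustrative note (not part of the posting): a minimal sketch of a Databricks-style ingestion step that lands raw files into a Delta table. It assumes a Delta-enabled Spark environment such as a Databricks cluster and an existing "planning" schema; the landing path, columns, and table name are hypothetical placeholders.

# Minimal Delta ingestion sketch; assumes Delta Lake support on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demand-plan-ingest").getOrCreate()

raw = (
    spark.read.option("header", "true")
         .csv("s3://example-landing-bucket/demand_plan/2024-06/")   # placeholder path
         .withColumn("load_date", F.current_date())                 # partition column
)

(
    raw.write.format("delta")
       .mode("append")
       .partitionBy("load_date")
       .saveAsTable("planning.demand_plan_raw")   # placeholder schema.table
)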

Posted 2 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
- Design data pipeline solutions based on the requirements, incorporating various optimization techniques depending on the sources involved and the data volume.
- Understanding of storage architectures such as Data Warehouse, Data Lake, and Lakehouse.
- Decide the tech stack and development standards, propose tech solutions and architectural patterns, and recommend best practices for the big data solution.
- Provide thought leadership and mentoring to the data engineering team on how data should be stored and processed more efficiently and quickly at scale.
- Ensure adherence to Security and Compliance policies for the products.
- Stay up to date with evolving cloud technologies and development best practices, including open-source software.
- Work in an Agile environment, provide optimized solutions to the customers, and use JIRA for project management.
- Proven problem-solving skills with the ability to anticipate roadblocks, diagnose problems and generate effective solutions.
- Analyze market segments and customer base to develop market solutions.
- Experience in working with batch processing / real-time systems; enhance and support solutions using PySpark/EMR, SQL and databases, AWS Athena, S3, Redshift, Lambda, AWS Glue, and other Data Engineering technologies.
- Proficiency in SQL writing, SQL concepts, data modelling techniques, data validation, data quality checks, and data engineering concepts.
- Proficiency in design, creation, deployment, and review, and getting the final sign-off from the client by following SDLC best practices for existing and new products.
- Experience with technologies like Databricks, HDFS, Redshift, Hadoop, S3, Athena, RDS, Elastic MapReduce on AWS, or similar services in GCP/Azure.
- Scheduling and monitoring of Spark jobs using tools like Airflow and Oozie.
- Familiar with version control tools like Git, CodeCommit, Jenkins, and CodePipeline.
- Work in a cross-functional team along with other Data Engineers, QA Engineers, and DevOps Engineers.
- Develop, test, and implement data solutions based on finalized design documents.
- Familiar with Unix/Linux and shell scripting.

Qualifications
- Experience: 4-7 years of experience.
- Excellent communication and problem-solving skills.
- Highly proficient in project management principles, methods, techniques, and tools.
- Minimum 2 to 4 years of working experience in PySpark, SQL, and AWS development.
- Experience working as a mentor for junior team members.
- Hands-on experience in the ETL process and performance optimization techniques is a must.
- The candidate should have taken part in architecture design and discussions.
- Minimum of 4 years of experience working with batch processing / real-time systems using various technologies like Databricks, HDFS, Redshift, Hadoop, Elastic MapReduce on AWS, Apache Spark, Hive/Impala, HDFS, and NoSQL databases, or similar services in Azure or GCP.
- Minimum of 4 years of experience working on Data Warehouse or Data Lake projects in a role beyond just data consumption.
- Minimum of 4 years of extensive working knowledge of building scalable solutions in AWS; an equivalent level of experience in Azure or Google Cloud is also acceptable.
- Minimum of 3 years of experience in programming languages (preferably Python).
- Experience in the Pharma domain will be a very big plus.
- Familiar with tools like Git, CodeCommit, Jenkins, and CodePipeline.
- Familiar with Unix/Linux and shell scripting.

Additional Skills:
- Exposure to Pharma and life sciences would be an added advantage.
- Certified in any cloud technologies like AWS, GCP, Azure.
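
Illustrative note (not part of the posting): a minimal Python/boto3 sketch of running a validation query through AWS Athena, one of the services listed above. The database, SQL, and results bucket are hypothetical placeholders.

# Minimal Athena query sketch: submit a query, poll for completion, print a few rows.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) AS row_count FROM orders_daily GROUP BY order_date",  # placeholder SQL
    QueryExecutionContext={"Database": "analytics"},                         # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
query_id = resp["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state != "SUCCEEDED":
    raise RuntimeError(f"Athena query {query_id} ended in state {state}")

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
for row in rows[:5]:   # the first row is the header
    print([col.get("VarCharValue") for col in row["Data"]])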

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
What You’ll Be Doing
The Senior Data Engineer will help build the next generation of cloud-based data tools and reporting for Experian’s MCE contact center division. Valuable, accurate, and timely information is core to our success, and this highly impactful role will be an essential part of that. Delivery pace and meeting our commitments are a primary focus to ensure that we are providing information at the speed of business. As part of this, understanding the business-side logic, environment, and workflows is important, so we need someone who is an incredible problem solver. If you are a self-driven, determined engineer who loves data, creating cutting-edge tools, and moving fast, this position is for you! We are a results-oriented team that is looking to attract and reward high-performing individuals. Come join us!

Responsibilities Include
Complex Dataset Construction: Construct datasets using complex, custom stored procedures, views, and queries. Strong SQL development skills are a must, preferably within Redshift and/or PostgreSQL.
Full-stack Data Solutions: Develop full-lifecycle data solutions, from data ingestion (using custom AWS-based data movement/ETL processes via Glue with Python code) to downstream real-time and historical reports.
Business Need to Execution Focus: Understand data-driven business objectives, develop solutions leveraging various technologies, and solve for those needs. Along with great problem-solving skills, a strong desire to learn our operational environment is a necessity.
Delivery Speed Enablement: Build reusable data-related tools, CI/CD pipelines, and automated testing. Enable DevOps practices focused on continuous improvement, and ultimately reduce unnecessary dependencies.
Shift Security Left: Ensure security components and requirements are implemented via automation up front as part of all solutions being developed.
Focus on the Future: Stay current on industry best practices and emerging technologies and proactively translate those into data platform improvements.
Be a Great Team Player: Train team members in proper coding techniques, create documentation as needed, and be a solid leader on the team as a senior-level engineer.
Support US Operations: Operate partially within the US Eastern time zone to ensure appropriate alignment and coordination with US-based teams.

Qualifications Required
What your background looks like:
Extensive experience in modern data manipulation and preparation via SQL code and translating business requirements into usable reports.
A solid automation skillset and the ability to design and create solutions that drive out manual data/report assembly processes within an organization.
Experience constructing reports within a BI tool while also taking ownership of upstream and downstream elements.
Able to create CI/CD pipelines that perform code deployments and automated testing.
Ability to identify business needs and proactively create reporting tools that will consistently add value.
Strong ability and willingness to help others and be an engaged part of the team. Patience and a collaborative personality are a must; we need a true team player who can help strengthen our overall group.
Goal-driven individual with a proven career track record of achievement. We want the best of the best and reward stellar performers!

Skills
We are looking for 4 to 8 years of experience, including:
3+ years developing complex SQL code (required), preferably within Redshift and/or PostgreSQL
1+ years using Python, Java, C#, or a similar object-oriented language
CI/CD pipeline construction, preferably using GitHub Actions
Git experience
General knowledge of AWS services, with a preference for Glue and Lambda
Infrastructure-as-code (CloudFormation, Terraform, or a similar product) a plus
Google Looker experience a plus (not required)

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.
Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here
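To make the Glue-plus-Python ingestion pattern described above concrete, here is a minimal sketch of a Glue-style PySpark job that reads raw files from S3 and loads them into a Redshift staging table. All bucket, connection, and table names are hypothetical, and a real job would add error handling and data quality checks.

```python
# Minimal sketch of a Glue-style PySpark ingestion job (hypothetical bucket,
# table, and connection names): read raw CSV events from S3, apply a light
# transformation, and load the result into a Redshift staging table.
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw files from S3 (hypothetical path).
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-mce-raw/contact-events/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Basic cleanup: drop rows with no id and normalize the event timestamp.
df = raw.toDF()
df = (
    df.filter(F.col("interaction_id").isNotNull())
      .withColumn("event_ts", F.to_timestamp("event_ts"))
)

# Load into a Redshift staging table through a pre-configured Glue connection
# (connection, database, and table names are assumptions for illustration).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(df, glue_context, "events"),
    catalog_connection="example-redshift-conn",
    connection_options={"dbtable": "staging.contact_events", "database": "analytics"},
    redshift_tmp_dir="s3://example-mce-temp/redshift/",
)

job.commit()
```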

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Title: Infrastructure Lead/Architect
Job Type: Full-Time
Location: On-site (Hyderabad, Pune, or New Delhi)

Job Summary
Join our customer's team as an Infrastructure Lead/Architect and play a pivotal role in architecting, designing, and implementing next-generation cloud infrastructure solutions. You will drive cloud and data platform initiatives, ensure system scalability and security, and act as a technical leader, shaping the backbone of our customers’ mission-critical applications.

Key Responsibilities
Architect, design, and implement robust, scalable, and secure AWS cloud infrastructure utilizing services such as EC2, S3, Lambda, RDS, Redshift, and IAM.
Lead the end-to-end design and deployment of high-performance, cost-efficient Databricks data pipelines, ensuring seamless integration with business objectives.
Develop and manage data integration workflows using modern ETL tools in combination with Python and Java scripting.
Collaborate with Data Engineering, DevOps, and Security teams to build resilient, highly available, and compliant systems aligned with operational standards.
Act as a technical leader and mentor, guiding cross-functional teams through infrastructure design decisions and conducting in-depth code and architecture reviews.
Oversee project planning, resource allocation, and deliverables, ensuring projects are executed on time and within budget.
Proactively identify infrastructure bottlenecks, recommend process improvements, and drive automation initiatives.
Maintain comprehensive documentation and uphold security and compliance standards across the infrastructure landscape.

Required Skills and Qualifications
8+ years of hands-on experience in IT infrastructure, cloud architecture, or related roles.
Extensive expertise with AWS cloud services; AWS certifications are highly regarded.
Deep experience with Databricks, including cluster deployment, Delta Lake, and machine learning integrations.
Strong programming and scripting proficiency in Python and Java.
Advanced knowledge of ETL/ELT processes and tools such as Apache NiFi, Talend, Airflow, or Informatica.
Proven track record in project management, leading cross-functional teams; PMP or Agile/Scrum certifications are a plus.
Familiarity with CI/CD workflows and Infrastructure as Code tools like Terraform and CloudFormation.
Exceptional problem-solving, stakeholder management, and both written and verbal communication skills.

Preferred Qualifications
Experience with big data platforms such as Spark or Hadoop.
Background in regulated environments (e.g., finance, healthcare).
Knowledge of Kubernetes and AWS container orchestration (EKS).

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description
As a Data Engineer on the Data and AI team, you will design and implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance. You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks. If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
Design and implement ETL/ELT frameworks that handle large-scale data operations, building reusable components for data ingestion, transformation, and orchestration while ensuring data quality and reliability.
Establish and maintain robust data governance standards by implementing comprehensive security controls, access management frameworks, and privacy-compliant architectures that safeguard sensitive information.
Drive the implementation of data solutions, both real-time and batch, optimizing them for both analytical workloads and AI/ML applications.
Lead technical design reviews and provide mentorship on data engineering best practices, identifying opportunities for architectural improvements and guiding the implementation of enhanced solutions.
Build data quality frameworks with robust monitoring systems and validation processes to ensure data accuracy and reliability throughout the data lifecycle.
Drive continuous improvement initiatives by evaluating and implementing new technologies and methodologies that enhance data infrastructure capabilities and operational efficiency.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including developing and optimizing ETL/ELT processes, implementing data governance controls, and reviewing code for data processing systems. You'll work closely with software engineers, scientists, and product managers, participating in technical design discussions and sharing your expertise in data architecture and engineering best practices. Your responsibilities extend to communicating with non-technical stakeholders, explaining data-related projects and their business impact. You'll also mentor junior engineers and contribute to maintaining comprehensive technical documentation. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved complex technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply!

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
3+ years of data engineering experience
Bachelor’s degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
Proficiency in designing and implementing logical data models that drive physical designs
Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2996966
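The "data quality frameworks with robust monitoring" responsibility described above can start as simply as validating a load and publishing the outcome as a metric an alarm can watch. Below is a minimal Python sketch using boto3 and CloudWatch; the namespace, dataset name, and checks are assumptions for illustration.

```python
# Minimal sketch of a data quality gate (hypothetical names): validate a freshly
# loaded partition and publish the result as a CloudWatch metric.
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_quality_metric(dataset: str, passed: bool, row_count: int) -> None:
    """Emit a pass/fail flag plus the observed row count for one dataset."""
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Quality",  # assumed namespace
        MetricData=[
            {"MetricName": "ValidationPassed",
             "Dimensions": [{"Name": "Dataset", "Value": dataset}],
             "Value": 1.0 if passed else 0.0},
            {"MetricName": "RowCount",
             "Dimensions": [{"Name": "Dataset", "Value": dataset}],
             "Value": float(row_count)},
        ],
    )

def validate_partition(rows: list) -> bool:
    """Toy checks: non-empty partition and no null primary keys."""
    return bool(rows) and all(r.get("order_id") is not None for r in rows)

if __name__ == "__main__":
    sample = [{"order_id": 1}, {"order_id": 2}]
    ok = validate_partition(sample)
    publish_quality_metric("orders_daily", ok, len(sample))
```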

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description
As a Data Engineer on the Data and AI team, you will implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance. You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks. If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative ideas to market.
Contribute to the implementation and maintenance of data architecture, infrastructure and storage solutions under the guidance of senior engineers, focusing on data quality and reliability.
Assist in building data pipelines, pipeline orchestration, data governance frameworks, data quality testing and pipeline management using continuous integration and deployments, while learning best practices from experienced team members.
Participate in technical discussions and contribute to database design decisions, learning about scalability and reliability considerations while implementing optimized code according to team standards.
Execute assigned technical tasks within larger projects, writing well-tested ETL/ELT pipelines, providing thorough documentation, and following established data engineering practices.
Work in an agile environment to deliver high-quality data pipelines supporting real-time and end-of-day data requirements.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including troubleshooting, data quality deep dives, developing optimized ETL/ELT processes, implementing data governance controls, unit testing code, and getting it reviewed. You’ll continually improve ongoing processes, automating or simplifying data engineering efforts. You'll work closely with senior engineers, data scientists, and product managers, participating in technical design discussions and learning the business context and technologies in data architecture. Your responsibilities extend to communicating with non-technical stakeholders, explaining root causes and their business impact. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work continually. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
1+ years of data engineering experience
Bachelor’s degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
Proficiency in designing and implementing logical data models that drive physical designs
Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2996963
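The "writing well-tested ETL/ELT pipelines" expectation above usually starts with small, pure transformation functions that are easy to unit test. A minimal sketch (function and field names are hypothetical) using pytest-style tests:

```python
# Minimal sketch of a testable transformation step: a pure function that
# normalizes one raw record, plus a pytest-style unit test for it.
from datetime import datetime

def normalize_event(raw: dict) -> dict:
    """Lower-case the channel, parse the timestamp, and drop unknown fields."""
    return {
        "customer_id": raw["customer_id"],
        "channel": raw["channel"].strip().lower(),
        "event_time": datetime.fromisoformat(raw["event_time"]),
    }

def test_normalize_event():
    raw = {"customer_id": "C42", "channel": " Email ",
           "event_time": "2024-05-01T10:30:00", "extra": "ignored"}
    out = normalize_event(raw)
    assert out["channel"] == "email"
    assert out["event_time"].hour == 10
    assert "extra" not in out
```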

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description
As a Data Engineer on the Data and AI team, you will implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance. You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks. If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative ideas to market.
Contribute to the implementation and maintenance of data architecture, infrastructure and storage solutions under the guidance of senior engineers, focusing on data quality and reliability.
Assist in building data pipelines, pipeline orchestration, data governance frameworks, data quality testing and pipeline management using continuous integration and deployments, while learning best practices from experienced team members.
Participate in technical discussions and contribute to database design decisions, learning about scalability and reliability considerations while implementing optimized code according to team standards.
Execute assigned technical tasks within larger projects, writing well-tested ETL/ELT pipelines, providing thorough documentation, and following established data engineering practices.
Work in an agile environment to deliver high-quality data pipelines supporting real-time and end-of-day data requirements.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including troubleshooting, data quality deep dives, developing optimized ETL/ELT processes, implementing data governance controls, unit testing code, and getting it reviewed. You’ll continually improve ongoing processes, automating or simplifying data engineering efforts. You'll work closely with senior engineers, data scientists, and product managers, participating in technical design discussions and learning the business context and technologies in data architecture. Your responsibilities extend to communicating with non-technical stakeholders, explaining root causes and their business impact. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work continually. By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
1+ years of data engineering experience
Bachelor’s degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
Proficiency in designing and implementing logical data models that drive physical designs
Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2996965

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

We’re seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and commercial analytics to contribute to our team. You’ll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value.

Key Responsibilities
Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions.
Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data).
Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities.
Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows.
Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning.
Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations.
Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends.

Qualifications
Education: Bachelor’s/Master’s in Data Science, Computer Science, Statistics, or a related field.
Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors).
Technical Skills:
Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn).
Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2).
Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies.
Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data.
Communication: Ability to articulate technical concepts to business stakeholders.

Preferred Qualifications
AWS Certified Machine Learning Specialty or similar certifications.
Experience with big data tools (Spark, Redshift) or MLOps practices.
Knowledge of NLP, reinforcement learning, or real-time recommendation systems.
Exposure to BI tools (Tableau, Power BI) for dashboarding.
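As an illustration of the propensity-modelling work described above, here is a minimal scikit-learn sketch on synthetic data; the column names and label logic are invented for the example, and in practice the same estimator could be trained and hosted through SageMaker rather than run locally.

```python
# Minimal propensity-model sketch on synthetic data (hypothetical features):
# predict the probability that a customer responds to the next-best action.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 5000),
    "frequency_90d": rng.poisson(3, 5000),
    "monetary_90d": rng.gamma(2.0, 150.0, 5000),
})
# Synthetic label: recent, frequent customers are more likely to respond.
logit = -0.01 * df["recency_days"] + 0.4 * df["frequency_90d"]
df["responded"] = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="responded"), df["responded"], test_size=0.2, random_state=7)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("Hold-out AUC:", round(roc_auc_score(y_test, scores), 3))
```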

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Role Overview
We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.

What you'll be doing:
Design, develop, and optimize DBT models to support scalable data transformations
Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect
Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks
Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures
Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications
Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution
Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows
Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance

What you'll have:
5+ years of hands-on experience with DBT, including model design, testing, and performance tuning
5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning
5+ years of programming experience, especially in building custom DBT macros, scripts, and APIs, and working with AWS services using boto3
3+ years of experience with orchestration tools like Apache Airflow and Prefect for scheduling DBT jobs
Hands-on experience with modern cloud data platforms like Snowflake, Redshift, Databricks, or BigQuery
Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch)
Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform)
Ability to effectively communicate timelines and deliver the MVPs set for the sprint
Strong analytical and problem-solving skills, with the ability to work across cross-functional teams

Nice to haves:
Experience in hardware manufacturing data processing
Contributions to open-source data engineering tools
Knowledge of Tableau or other BI tools for data visualization
Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization
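To illustrate the DBT-plus-Airflow orchestration pattern described above, here is a minimal Airflow DAG that runs dbt run followed by dbt test on a daily schedule. The project path, target name, and cron expression are assumptions, and the schedule argument assumes Airflow 2.4 or later (earlier versions use schedule_interval).

```python
# Minimal sketch of an Airflow DAG that orchestrates a DBT project
# (hypothetical project path and target). dbt run builds the models,
# dbt test validates them afterwards.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # 02:00 UTC daily
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics_project && dbt test --target prod",
    )
    dbt_run >> dbt_test
```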

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Description
Amazon Prime is a program that provides millions of members with unlimited one-day delivery, unlimited streaming of video and music, secure online photo storage, access to Kindle e-books, as well as Prime special deals on Prime Day. In India, Prime members get unlimited free One-Day and Two-Day delivery, video streaming, and early and exclusive access to deals. After the launch in 2016, the Amazon Prime team is now looking for a detail-oriented business intelligence engineer to lead business intelligence for Prime and drive member insights. At Amazon, we're always working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a dynamic, organized, and customer-focused analytics expert to join our Amazon Prime Analytics team. The team supports the Amazon India Prime organization by producing and delivering metrics, data, models and strategic analyses. This is an individual contributor role that requires an individual with excellent team leadership skills, business acumen, and the breadth to work across multiple Amazon Prime business teams, Data Engineering, Machine Learning and Software Development teams. A successful candidate will be a self-starter comfortable with ambiguity, strong attention to detail, and a proven ability to work in a fast-paced and ever-changing environment.

Key job responsibilities
The successful candidate will work with multiple global site leaders, business analysts, software developers, database engineers, and product management, in addition to stakeholders in business, finance, marketing and service teams, to create a coherent customer view. They will:
Define and lead the data strategy of various analytical products owned within the Prime Analytics team.
Develop and improve the current data architecture using AWS Redshift, AWS S3, AWS Aurora (Postgres) and Hadoop/EMR.
Improve upon the data ingestion models, ETL jobs, and alarming to maintain data integrity and data availability.
Create the entire ML framework for Data Scientists in AWS Bedrock, SageMaker and EMR clusters.
Stay up to date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of advertiser experience.
Design and manage data models that serve multiple Weekly Business Reports (WBRs) and other business-critical reporting.

Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing and building ETL pipelines
Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Experience with big data technologies such as: Hadoop, Hive, Spark, EMR

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
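A recurring task in a Redshift-centred stack like the one described above is bulk loading from S3 with the COPY command. Here is a minimal sketch using the Redshift Data API via boto3; the cluster, database, IAM role, bucket, and table names are hypothetical.

```python
# Minimal sketch of a batch load into Redshift via the Redshift Data API
# (all identifiers are hypothetical). COPY from S3 is the usual
# high-throughput ingestion path for warehouse tables.
import boto3

client = boto3.client("redshift-data")

copy_sql = """
    COPY prime_analytics.daily_signups
    FROM 's3://example-prime-data/signups/2024-05-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-prime-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("Submitted statement:", response["Id"])
```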

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Description IN Data Engineering & Analytics(IDEA) Team is looking to hire a rock star Data Engineer to build data pipelines and enable ML models for Amazon India businesses. IN Data Engineering & Analytics (IDEA) team is the central Data engineering and Analytics team for all A.in businesses. The team's charter includes 1) Providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams which includes central Petabyte-scale Redshift data warehouse, analytics infrastructure and frameworks for visualizing and automating generation of reports & insights and self-service data applications for ingesting, storing, discovering, processing & querying of the data 2) Providing business specific data solutions for various business streams like Payments, Finance, Consumer & Delivery Experience. The Data Engineer will play a key role in performing data extraction, data transformation, building and managing data pipelines to ensure data availability for ML & LLM models of IN businesses. The role sits in the heart of technology & business worlds and provides opportunity for growth, high business impact and working with seasoned business leaders. An ideal candidate will be someone with sound technical background in working with SQL, Scripting (Python, typescript, javascript), databases, ML/LLM models, big data technologies such as Apache Spark (Pyspark, Spark SQL). An ideal candidate will be someone who is a self-starter that can start with a requirement & work backwards to conceive and devise best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight, business impact and 'gets work done' in business time. Key job responsibilities Build end to end data extraction, data transformation and data pipelines to ensure data availability for ML & LLM models that are critical to IN businesses. Enable ML/LLM tools by setting up all the required underlying data infrastructure, data pipelines and permissions to generate training and inference data for the ML models. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Scripting and Amazon/AWS big data technologies Must possess strong verbal and written communication skills, be self-driven, and deliver high quality results in a fast-paced environment. Drive operational excellence strongly and build automation and mechanisms to reduce operations Enjoy working closely with your peers in a group of very smart and talented engineers. A day in the life India Data Engineering and Analytics (IDEA) team is central data engineering team for Amazon India. Our vision is to simplify and accelerate data driven decision making for Amazon India by providing cost effective, easy & timely access to high quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India) which serves as a central data platform and provides data engineering infrastructure, ready to use datasets and self-service reporting capabilities. 
Our core responsibilities towards the India marketplace include a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing and querying of data, b) building ready-to-use datasets for easy and faster access to the data, c) automating standard business analysis, reporting and dashboarding, and d) empowering the business with self-service tools to manage data and generate insights.

Basic Qualifications
2+ years of data engineering experience
Experience with SQL
Experience with one or more scripting languages (e.g., Python, KornShell)
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)

Preferred Qualifications
Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
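As a flavour of the PySpark and Spark SQL work referenced above, here is a minimal sketch that registers a raw dataset as a temporary view and writes a daily aggregate back to S3; the paths and column names are hypothetical.

```python
# Minimal PySpark / Spark SQL sketch (hypothetical paths and columns):
# aggregate raw orders into a daily, per-marketplace summary table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

orders = spark.read.parquet("s3://example-in-data/orders/")  # assumed location
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date,
           marketplace,
           COUNT(*)          AS order_count,
           SUM(order_amount) AS gmv
    FROM orders
    GROUP BY order_date, marketplace
""")

# Partitioned output keeps downstream Athena/Redshift Spectrum scans cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-in-data/curated/orders_daily/")
```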

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

We are #hiring for one of our #clients for the role of #dataengineer (7 years, #jaipur).

Data Engineer - DWH
Data ingestion and transformation in AWS, and coordinating tasks amongst the team.

Our Data Engineers will typically:
Work in building and architecting multiple data pipelines, and end-to-end ETL and ELT processes for the DW.
Build, maintain, and monitor batch and real-time ETL pipelines in an AWS architecture (Kinesis, S3, EMR, Redshift, etc.).
Work closely with the Data Analytics teams to develop a clear understanding of data and data infrastructure needs; assist with data-related technical issues.
Develop data strategy (source, flow of data, storage, and usage), best practices, and patterns.
Perform data validation and quality assurance.
Present technical solutions to various stakeholders.
Provide day-to-day support of the DW and DL environments, with excellent communication across teams; monitor new deployments and services, escalating issues where appropriate.

Who we prefer:
Data warehousing concepts
Building ETL pipelines
Performance tuning of SQL queries
Data modeling, architecture, and design of data systems
Job scheduling frameworks
Documentation skills
Good to have – AWS, EMR, Spark

Education Qualification: BE / B.Tech / M.Tech from Tier-1 institutes

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About Markovate
At Markovate, we don't just follow trends; we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision.

Overview
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modelling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault.

Requirements:
9+ years of experience in data engineering and data architecture.
Excellent communication and interpersonal skills, with the ability to engage with teams.
Strong problem-solving, decision-making, and conflict-resolution abilities.
Proven ability to work independently and lead cross-functional teams.
Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
The candidate must have strong work ethics and trustworthiness.
Must be highly collaborative and team oriented.

Responsibilities:
Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer); see the sketch after this listing.
Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
Work closely with QA teams to integrate test automation and ensure data quality.
Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases.
Document architectures, pipelines, and workflows for internal stakeholders.
Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and dimensional modelling.
Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers.
Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
Experienced in data validation and exploratory data analysis with pandas profiling and AWS Glue Data Quality.

Great to have:
Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
Experience with data modeling, data structures, and database design.
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Proficiency in SQL and at least one programming language (e.g., Python).

What it's like to be at Markovate:
At Markovate, we thrive on collaboration and embrace every innovative idea. We invest in continuous learning to keep our team ahead in the AI/ML landscape. Transparent communication is key; every voice at Markovate is valued. Our agile, data-driven approach transforms challenges into opportunities. We offer flexible work arrangements that empower creativity and balance. Recognition is part of our DNA; your achievements drive our success. Markovate is committed to sustainable practices and positive community impact. Our people-first culture means your growth and well-being are central to our mission.

Location: hybrid model, 2 days onsite.

(ref:hirist.tech)
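For the event-based orchestration item above, a common pattern is an S3-triggered Lambda that starts a Step Functions execution for each newly landed file. A minimal sketch follows; the ARN, bucket, and event shape assume a standard s3:ObjectCreated notification, and all names are hypothetical.

```python
# Minimal sketch of event-based orchestration (hypothetical names): an
# S3-triggered Lambda handler that starts a Step Functions execution per file.
import json

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:example-ingest"

def handler(event, context):
    """Lambda entry point for s3:ObjectCreated notifications."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"status": "started", "files": len(records)}
```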

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

The Opportunity
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data team in Gurgaon. The ideal candidate will have a strong background in designing, building, and maintaining robust, scalable, and efficient data pipelines and data warehousing solutions. You will play a crucial role in transforming raw data into actionable insights, enabling data-driven decision-making across the organization.

Responsibilities:
Data Pipeline Development: Design, develop, construct, test, and maintain highly scalable data pipelines using various ETL/ELT tools and programming languages (e.g., Python, Scala, Java).
Data Warehousing: Build and optimize data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Databricks) to support reporting, analytics, and machine learning initiatives.
Data Modeling: Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and design optimal data models (dimensional, relational, etc.).
Performance Optimization: Identify and implement solutions for data quality issues, data pipeline performance bottlenecks, and data governance challenges.
Cloud Technologies: Work extensively with cloud-based data platforms (AWS, Azure, GCP) and their respective data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Azure Synapse, GCS, Dataflow, BigQuery).
Automation & Monitoring: Implement automation for data pipeline orchestration, monitoring, and alerting to ensure data reliability and availability.
Mentorship: Mentor junior data engineers, provide technical guidance, and contribute to best practices and architectural decisions within the data team.
Collaboration: Work closely with cross-functional teams, including product, engineering, and business intelligence, to deliver data solutions that meet business needs.
Documentation: Create and maintain comprehensive documentation for data pipelines and data models.

Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
5+ years of professional experience in data engineering, with a strong focus on building and optimizing data pipelines and data warehousing solutions.
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java). Python is highly preferred.
Extensive experience with SQL and relational databases.
Demonstrated experience with cloud data platforms (AWS, Azure, or GCP) and their relevant data services.
Strong understanding of data warehousing concepts (e.g., Kimball methodology, OLAP, OLTP) and experience with data modeling techniques.
Experience with big data technologies (e.g., Apache Spark, Hadoop, Kafka).
Familiarity with version control systems (e.g., Git).

Skills:
Experience with specific data warehousing solutions like Snowflake, Redshift, or Google BigQuery.
Knowledge of containerization technologies (Docker, Kubernetes).
Experience with CI/CD pipelines for data solutions.
Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker).
Understanding of machine learning concepts and how data engineering supports ML workflows.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a collaborative team in a fast-paced environment.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

5.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description
Role Overview: We are seeking a skilled and detail-oriented Integration Consultant with 5 to 6 years of experience to join our team. The ideal candidate will have expertise in designing, building, and maintaining data pipelines and ETL workflows, leveraging tools and technologies like AWS Glue, CloudWatch, PySpark, APIs, SQL, and Python.

Key Responsibilities
Pipeline Creation and Maintenance: Design, develop, and deploy scalable data pipelines. Optimize pipeline performance and ensure data accuracy and integrity.
ETL Development: Create ETL workflows using AWS Glue and PySpark to process and transform large datasets. Ensure compliance with data governance and security standards.
Data Analysis and Processing: Write efficient SQL queries for data extraction, transformation, and reporting. Develop Python scripts to automate data tasks and improve workflows.
Monitoring and Troubleshooting: Utilize AWS CloudWatch to monitor pipeline health and performance. Identify and resolve issues in a timely manner to minimize downtime.
API Integration: Integrate and manage APIs to connect external data sources and services.
Collaboration: Work closely with cross-functional teams to understand data requirements and provide solutions. Communicate effectively with stakeholders to ensure successful project delivery.

Required Skills and Qualifications
Experience: 5-6 years. o9 Solutions platform experience is mandatory.
Strong experience with AWS Glue and CloudWatch.
Proficiency in PySpark, Python, and SQL.
Hands-on experience with API integration and management.
Solid understanding of ETL processes and pipeline creation.
Strong analytical and problem-solving skills.
Familiarity with data security and governance best practices.

Preferred Skills
Knowledge of other AWS services such as S3, EC2, Lambda, or Redshift.
Experience with PySpark, APIs, SQL optimization, and Python.
Exposure to data visualization tools or frameworks.

Education
Bachelor's degree in Computer Science, Information Technology, or a related field.

Note: For your candidature to be considered for this job, you must also apply on the company's redirected page for this job. Please make sure you apply on the redirected page as well.

(ref:hirist.tech)
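To illustrate the Glue-plus-CloudWatch monitoring duties described above, here is a minimal sketch that checks the latest run of a Glue job and publishes a custom failure metric that a CloudWatch alarm could watch; the job name and metric namespace are hypothetical.

```python
# Minimal sketch of Glue job monitoring (hypothetical job and namespace names):
# read the latest run state and publish a 0/1 failure metric to CloudWatch.
import boto3

glue = boto3.client("glue")
cloudwatch = boto3.client("cloudwatch")

JOB_NAME = "example-o9-ingest-job"  # assumed Glue job name

runs = glue.get_job_runs(JobName=JOB_NAME, MaxResults=1)["JobRuns"]
state = runs[0]["JobRunState"] if runs else "NO_RUNS"

cloudwatch.put_metric_data(
    Namespace="Integration/GlueJobs",
    MetricData=[{
        "MetricName": "JobFailed",
        "Dimensions": [{"Name": "JobName", "Value": JOB_NAME}],
        "Value": 1.0 if state in ("FAILED", "TIMEOUT", "ERROR") else 0.0,
    }],
)
print(f"{JOB_NAME}: last run state = {state}")
```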

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Role: Data Engineer
Location: Bengaluru, Karnataka, India
Type: Contract / Freelance

About The Role
We're looking for an experienced Data Engineer on contract (4-8 years) to join our data team. You'll be key in building and maintaining our data systems on AWS. You'll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. You'll be in charge of the whole lifecycle of our data pipelines, making sure the data is good, reliable, and fast.

What You'll Do
Design and build efficient data pipelines using Spark / PySpark / Scala.
Manage complex data processes with Airflow, creating and fixing any issues with the workflows (DAGs).
Clean, transform, and prepare data for analysis.
Use Python for data tasks, automation, and building tools.
Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure.
Collaborate closely with the Analytics team to understand what data they need and provide solutions.
Help develop and maintain our Node.js backend, using TypeScript, for data services.
Use YAML to manage the settings for our data tools.
Set up and manage automated deployment processes (CI/CD) using GitHub Actions.
Monitor and fix problems in our data pipelines to keep them running smoothly.
Implement checks to ensure our data is accurate and consistent.
Help design and build data warehouses and data lakes.
Use SQL extensively to query and work with data in different systems.
Work with streaming data using technologies like Kafka for real-time data processing.
Stay updated on the latest data engineering technologies.
Guide and mentor junior data engineers.
Help create data management rules and procedures.

What You'll Need
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
4-8 years of experience as a Data Engineer.
Strong skills in Spark and Scala for handling large amounts of data.
Good experience with Airflow for managing data workflows and understanding DAGs.
Solid understanding of how to transform and prepare data.
Strong programming skills in Python for data tasks and automation.
Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena).
Experience building data solutions for Analytics teams.
Familiarity with Node.js for backend development.
Experience with TypeScript for backend development is a plus.
Experience using YAML for configuration management.
Hands-on experience with GitHub Actions for automated deployment (CI/CD).
Good understanding of data warehousing concepts.
Strong database skills (OLAP/OLTP).
Excellent command of SQL for data querying and manipulation.
Experience with stream processing using Kafka or similar technologies.
Excellent problem-solving, analytical, and communication skills.
Ability to work well independently and as part of a team.

(ref:hirist.tech)
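As an illustration of the Kafka-based stream processing mentioned above, here is a minimal consumer sketch using the kafka-python client; the topic, broker address, and enrichment logic are hypothetical.

```python
# Minimal Kafka consumer sketch (hypothetical topic and broker): read JSON
# events, apply a small enrichment, and hand records to a downstream sink.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                      # assumed topic
    bootstrap_servers=["broker-1:9092"],       # assumed broker
    group_id="analytics-etl",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

def enrich(event: dict) -> dict:
    """Flag mobile traffic based on a (hypothetical) user_agent field."""
    event["is_mobile"] = event.get("user_agent", "").lower().startswith("mobile")
    return event

for message in consumer:
    record = enrich(message.value)
    # In a real pipeline this would be buffered and written to S3 / Redshift.
    print(record)
```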

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Company Description
Seosaph-infotech is a rapidly growing company in customized software development, providing advanced technology solutions and trusted services across multiple business verticals. In just two years, Seosaph-infotech has delivered exceptional solutions to industries such as finance, healthcare, and e-commerce, establishing itself as a reliable IT partner for businesses seeking to enhance their technological capabilities.

Responsibilities:
Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, Spark, Databricks Delta Lakehouse or other cloud data warehousing technologies.
Govern data design/modelling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Develop a deep understanding of the business domains like Customer, Sales, Finance, Supplier, and the enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Drive collaborative reviews of data model design, code, data, and security features to drive data product development.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; SAP data model.
Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping.
Partner with the data stewards team for data discovery and action by business customers and stakeholders.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Assist with data planning, sourcing, collection, profiling, and transformation.
Support data lineage and mapping of source system data to canonical data stores.
Create Source to Target Mappings (STTM) for ETL and BI.

Skills needed:
Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models, CPG domains).
Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data like IRI and Nielsen.

Location: Remote.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; see the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
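Several of the questions above, DISTKEY/SORTKEY design and the COPY command in particular, lend themselves to a short worked example. The sketch below submits the relevant SQL through the Redshift Data API with boto3; every identifier, ARN, and S3 path is hypothetical.

```python
# Minimal sketch tying together DISTKEY/SORTKEY design and COPY, submitted
# through the Redshift Data API (all identifiers are hypothetical).
import boto3

client = boto3.client("redshift-data")

statements = [
    # DISTKEY co-locates rows that join on customer_id across slices;
    # SORTKEY speeds up range-restricted scans on order_date.
    """
    CREATE TABLE IF NOT EXISTS sales.orders (
        order_id     BIGINT,
        customer_id  BIGINT,
        order_date   DATE,
        amount       DECIMAL(12,2)
    )
    DISTKEY (customer_id)
    SORTKEY (order_date);
    """,
    # COPY is the bulk-load path from S3 into the table.
    """
    COPY sales.orders
    FROM 's3://example-bucket/orders/2024/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-copy-role'
    FORMAT AS PARQUET;
    """,
]

response = client.batch_execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="analyst",
    Sqls=statements,
)
print("Submitted batch:", response["Id"])
```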

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies