
3632 Redshift Jobs - Page 8

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hi, we are looking for a skilled AWS Data Engineer. Interested candidates, please find the JD below: • Bachelor’s degree and 3+ years’ experience in the implementation of modern data ecosystems on AWS/cloud platforms. • 3+ years’ experience in Python and SQL. • 3+ years’ experience in Lambda. • 3+ years’ experience in Glue. • 3+ years’ experience in Redshift, Aurora, RDS, S3, and other AWS services. • Experience in security, policies, and IAM.
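Illustrative note (not part of the posting): the stack listed above typically pairs Lambda with Redshift for serverless loads. Below is a minimal sketch of such a handler using the boto3 Redshift Data API; the cluster name, schema, table, IAM role, and event shape are assumptions, not details from the JD.

```python
# Hypothetical sketch: Lambda handler that loads a newly landed S3 object
# into Redshift via the Redshift Data API. All resource names are placeholders.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # S3 put-event structure: bucket name and object key of the new file
    record = event["Records"][0]["s3"]
    s3_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    copy_sql = (
        "COPY analytics.raw_events "
        f"FROM '{s3_path}' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' "
        "FORMAT AS PARQUET;"
    )

    # Execute the COPY asynchronously; the statement id can be polled later
    response = redshift_data.execute_statement(
        ClusterIdentifier="demo-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )
    return {"statement_id": response["Id"]}
```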

Posted 4 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We are seeking a highly experienced AWS Data Solution Architect to lead the design and implementation of scalable, secure, and high-performance data architectures on the AWS cloud. The ideal candidate will have a deep understanding of cloud-based data platforms, analytics, and best practices for optimizing data pipelines and storage. You will work closely with data engineers, business stakeholders, and cloud architects to deliver robust data solutions.

Key Responsibilities:
1. Architecture Design and Planning: Design scalable and resilient data architectures on AWS that include data lakes, data warehouses, and real-time processing. Architect end-to-end data solutions leveraging AWS services such as S3, Redshift, RDS, DynamoDB, Glue, and Lake Formation. Develop multi-layered security frameworks for data protection and governance.
2. Data Pipeline Development: Build and optimize ETL/ELT pipelines using AWS Glue, Data Pipeline, and Lambda. Integrate data from various sources such as RDBMS, NoSQL, APIs, and streaming platforms. Ensure high availability and real-time processing capabilities for mission-critical applications.
3. Data Warehousing and Analytics: Design and optimize data warehouses using Amazon Redshift or Snowflake. Implement data modeling, partitioning, and indexing for optimal performance. Create analytical models to drive business insights and data-driven decision-making.
4. Real-time Data Processing: Implement real-time data processing using AWS Kinesis, Kafka, or MSK. Architect solutions for event-driven architectures with Lambda and EventBridge.
5. Security and Compliance: Implement best practices for data security, encryption, and access control using IAM, KMS, and Lake Formation. Ensure compliance with regulatory standards such as GDPR, HIPAA, and CCPA.
6. Monitoring and Optimization: Monitor performance, optimize costs, and enhance the reliability of data pipelines and storage. Set up observability with AWS CloudWatch, X-Ray, and CloudTrail. Troubleshoot issues and ensure business continuity with automated recovery mechanisms.
7. Documentation and Best Practices: Create detailed architecture diagrams, data flow mappings, and reference documentation. Establish best practices for data governance, architecture design, and deployment.
8. Collaboration and Leadership: Work closely with data engineers, application developers, and DevOps teams to ensure seamless integration. Act as a technical advisor to business stakeholders for cloud-based data solutions.

Regulatory Compliance Reporting Experience: The architect should be able to resolve complex challenges arising from the strict regulatory environment in India and the need to balance compliance with operational efficiency. Key complexities include:
a) Building data segregation and access control capability: this requires an in-depth understanding of data privacy laws, Amazon’s global data architecture, and the ability to design systems that segregate and control access to sensitive payment data without compromising functionality.
b) Integrating diverse data sources into the Secure Redshift Cluster (SRC): this involves working with multiple teams and systems, each with its own data structure and transfer protocols.
c) Instrumenting additional UPI data elements: this requires collaborating with UPI tech teams and a deep understanding of UPI transaction flows to ensure accurate and compliant data capture.
d) Automating Law Enforcement Agency (LEA) and Financial Intelligence Unit (FIU) reporting: this involves creating secure, automated pipelines for highly sensitive data, ensuring accuracy and timeliness while meeting strict regulatory requirements.

The architect will also extend India-specific solutions to serve worldwide markets. Complexities include:
a) Designing a unified data storage and compute architecture: harmonizing diverse tech stacks and data logging practices across multiple countries while considering data sovereignty laws and the cost implications of cross-border data transfers.
b) Setting up comprehensive datamarts covering metrics and dimensions: standardizing metric definitions across markets, ensuring data consistency, and designing for scalability to accommodate future growth.
c) Enabling customer segmentation across power-up programs: integrating data from diverse programs while maintaining data integrity and respecting country-specific data usage regulations.
d) Managing time zone challenges: synchronizing data across multiple time zones requires innovative solutions to ensure timely data availability without compromising completeness or accuracy.
e) Navigating regulatory complexities: designing systems that comply with varying and evolving data regulations across multiple countries while maintaining operational efficiency and flexibility for future changes.
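Illustrative note (not from the posting): responsibility 4 above centers on event-driven, real-time processing with Kinesis and Lambda. A minimal sketch of that pattern follows; the record schema and downstream step are assumptions.

```python
# Hedged sketch: a Lambda consumer for a Kinesis stream in an event-driven
# pipeline. Field names and the downstream write are placeholders.
import base64
import json

def handler(event, context):
    payments = []
    for record in event["Records"]:
        # Kinesis delivers record data base64-encoded inside the Lambda event
        payload = base64.b64decode(record["kinesis"]["data"])
        payments.append(json.loads(payload))

    # A real pipeline would write the batch to an S3/Redshift staging area here
    print(f"processed {len(payments)} payment events")
    return {"batchItemFailures": []}
```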

Posted 4 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: Gurgaon Office Address: Floor 22, Tower C, Epitome Building No. 5, DLF Cyber City, DLF Phase 2, Gurgaon - 122002, Haryana, India TBO – Travel Boutique Online Group – (www.tbo.com) TBO is a global platform that aims to simplify all buying and selling travel needs of travel partners across the world. The proprietary technology platform aims to simplify the demands of the complex world of global travel by seamlessly connecting the highly distributed travel buyers and travel suppliers at scale. The TBO journey began in 2006 with a simple goal – to address the evolving needs of travel buyers and suppliers, and what started off as a single-product air ticketing company has today become the leading B2A (Business to Agents) travel portal across the Americas, UK & Europe, Africa, Middle East, India, and Asia Pacific. Today, TBO’s products range from air, hotels, rail, holiday packages, car rentals, transfers, sightseeing, cruise, and cargo. Apart from these products, our proprietary platform relies heavily on AI/ML to offer unique listings and products, meeting specific requirements put forth by customers, thus increasing conversions. TBO’s approach has always been technology-first, and we continue to invest in new innovations and offerings to make travel easy and simple. TBO’s travel APIs serve large travel ecosystems across the world, while the modular architecture of the platform enables new travel products and expansion across new geographies. Why TBO: • You will influence and contribute to “Building the World’s Largest Technology-Led Travel Distribution Network” for a $9 trillion global travel business market. • We are the emerging leaders in technology-led, end-to-end travel management in the B2B space. • Physical presence in 47 countries with business in 110 countries. • We are reputed for our long-lasting, trusted relationships. We stand by our ecosystem of suppliers and buyers to service the end customer. • An open and informal start-up environment which cares. What TBO offers to a Life Traveller in You: • Enhance your leadership acumen. Join the journey to create global scale and ‘World Best’. • Challenge yourself to do something path-breaking. Be empowered. The only thing to stop you will be your imagination. • As we enter the last phase of the pandemic, the travel space is likely to see significant growth. Witness and shape this space. It will be one exciting journey. We are a tech-driven organization focused on leveraging data, AI, and scalable cloud infrastructure to drive impactful business decisions. We are looking for a highly skilled and experienced Head of Data Science and Engineering with a strong background in machine learning, AI, and big data architecture, ideally from a top-tier engineering institute. Key Responsibilities: Design, develop, and maintain robust, scalable, and high-performance data pipelines and ETL processes. Architect and implement large-scale data infrastructure using tools such as Spark, Kafka, Airflow, and cloud platforms (AWS/GCP/Azure). Deploy machine learning models into production. Optimize data workflows to handle structured and unstructured data across various sources. Develop and maintain metadata management, data quality checks, and observability. Drive best practices in data engineering, data governance, and model monitoring. Mentor junior team members and contribute to strategic technology decisions. Must-Have Qualifications: 10+ years of experience in data engineering/data science, data architecture, or a related domain.
Strong expertise in Python/Scala/Java and SQL. Proven experience with big data tools (Spark, Hadoop, Hive), streaming systems (Kafka, Flink), and workflow orchestration tools (Airflow, Prefect). Deep understanding of data modeling, data warehousing, and distributed systems. Strong exposure to ML/AI pipelines, MLOps, and model lifecycle management. Experience with cloud platforms such as AWS (S3, Redshift, Glue), GCP (BigQuery, Dataflow), or Azure (Data Lake, Synapse). Graduate/Postgraduate from a premium engineering institute (IITs, NITs, BITS, etc.). Exposure to statistical modeling around pricing and churn management is a plus. Exposure to fine-tuning LLMs is a plus.
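Illustrative note (not part of the posting): the role above lists workflow orchestration with Airflow. A minimal sketch of such a DAG follows; the DAG id, schedule, and task bodies are assumptions, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Hedged sketch of a daily extract -> transform DAG of the kind the role describes.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull bookings from the source API")

def transform():
    print("clean and aggregate bookings")

with DAG(
    dag_id="bookings_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # Transform runs only after the extract completes
    extract_task >> transform_task
```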

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Testing/Quality Assurance Main location: India, Karnataka, Bangalore Position ID: J0725-1838 Employment Type: Full Time Position Description: Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please. Job Description: Job Title: ETL Testing Experience: 5-8 Years Location: Chennai, Bangalore Employment Type: Full Time Job Type: Work from Office (Monday - Friday) Shift Timing: 12:30 PM to 9:30 PM Required Skills: Analytical skills to understand requirements and develop test cases, ability to understand and manage data, strong SQL skills, hands-on testing of data pipelines built using Glue, S3, Redshift and Lambda, collaboration with developers to build automated testing where appropriate, understanding of data concepts like data lineage, data integrity and quality; experience testing financial data is a plus. Your future duties and responsibilities: Expert-level analytical and problem-solving skills; able to show flexibility regarding testing. Awareness of quality management tools and techniques. Ensures best-practice quality assurance of deliverables; understands and works within agreed architectural, process, data and organizational frameworks. Advanced communication skills; fluent in English (written/verbal) and local language as appropriate. Open-minded; able to share information and transfer knowledge and expertise to team members. Required qualifications to be successful in this role: Must have skills: ETL, SQL, hands-on testing of data pipelines, Glue, S3, Redshift, data lineage, data integrity. Good to have skills: Experience testing financial data is a plus. Skills: Apache Spark, Python, SQL What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value.
You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
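Illustrative note (not part of the posting): the ETL testing role above calls for hands-on SQL checks against pipelines that land data in Redshift. A hedged sketch of that kind of data-quality test follows; the host, credentials, and table names are placeholders, and Redshift connectivity is assumed via the PostgreSQL protocol.

```python
# Hypothetical pytest-style data-quality checks against a Redshift table.
import os

import psycopg2  # Redshift speaks the PostgreSQL wire protocol

def fetch_one(sql: str):
    conn = psycopg2.connect(
        host="demo-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="qa_user",
        password=os.environ.get("REDSHIFT_PASSWORD", ""),
    )
    try:
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchone()
    finally:
        conn.close()

def test_no_null_business_keys():
    # Data-integrity check: every loaded row must carry a transaction id
    (null_count,) = fetch_one(
        "SELECT COUNT(*) FROM staging.transactions WHERE txn_id IS NULL;"
    )
    assert null_count == 0

def test_row_counts_match_source():
    # Lineage check: target row count should match the raw landing extract
    (target,) = fetch_one("SELECT COUNT(*) FROM staging.transactions;")
    (source,) = fetch_one("SELECT COUNT(*) FROM landing.transactions_raw;")
    assert target == source
```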

Posted 4 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Software Development/Engineering Main location: India, Karnataka, Bangalore Position ID: J0725-1837 Employment Type: Full Time Position Description: Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please. Job Title: Lead Data Engineer and Developer Position: Tech Lead Experience: 8+ Years Category: Software Development Main location: Hyderabad, Chennai Position ID: J0625-0503 Employment Type: Full Time Lead Data Engineers and Developers with clarity on execution, design, architecture and problem solving. Strong understanding of Cloud engineering concepts, particularly AWS. Participate in Sprint planning and squad operational activities to guide the team on right prioritization. SQL - Expert (Must have) AWS (Redshift/Lambda/Glue/SQS/SNS/CloudWatch/Step Functions/CDK (or Terraform)) - Expert (Must have) PySpark - Intermediate/Expert AWS Airflow - Intermediate (Nice to have) Python - Intermediate (Must have, or PySpark knowledge) Your future duties and responsibilities: Lead Data Engineers and Developers with clarity on execution, design, architecture and problem solving. Strong understanding of Cloud engineering concepts, particularly AWS. Participate in Sprint planning and squad operational activities to guide the team on right prioritization. Required qualifications to be successful in this role: Must have skills: SQL - Expert (Must have) AWS (Redshift/Lambda/Glue/SQS/SNS/CloudWatch/Step Functions/CDK (or Terraform)) - Expert (Must have) PySpark - Intermediate/Expert Python - Intermediate (Must have, or PySpark knowledge) Good to have skills: AWS Airflow - Intermediate (Nice to have) Skills: Apache Spark, Python, SQL What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value.
You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
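Illustrative note (not part of the posting): the lead role above names Step Functions among its must-have AWS services. A minimal, hedged sketch of kicking off such a state machine from Python follows; the state machine ARN, region, and input payload are assumptions.

```python
# Hypothetical sketch: starting a Step Functions state machine that chains
# Glue and Lambda stages, e.g. from a nightly trigger.
import json

import boto3

sfn = boto3.client("stepfunctions")

def trigger_nightly_load(batch_date: str) -> str:
    response = sfn.start_execution(
        stateMachineArn=(
            "arn:aws:states:ap-south-1:123456789012:"
            "stateMachine:nightly-redshift-load"
        ),
        name=f"nightly-{batch_date}",           # unique execution name per run
        input=json.dumps({"batch_date": batch_date}),
    )
    return response["executionArn"]

if __name__ == "__main__":
    print(trigger_nightly_load("2025-07-24"))
```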

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Technical Sales Support Engineer Location: Bengaluru, Karnataka Experience: 5+ Years Education: Bachelor’s degree in computer science, Engineering, or related field About the Role: As a Technical Sales Support Engineer for the Global Technical Sales Environment, you will be responsible for the management and optimization of cloud resources that support technical sales engagements. This role involves provisioning, maintaining, and enhancing the infrastructure required for POCs, workshops, and product demonstrations for the technical sales community. Beyond infrastructure management, you will play a critical role in automation, driving efficient deployments, optimizing cloud operations, and developing tools to enhance productivity. Security will be a key focus, requiring proactive identification and mitigation of vulnerabilities to ensure compliance with enterprise security standards. Expertise in automation, scripting, and infrastructure development will be essential to deliver scalable, secure, and high-performance solutions, supporting customer, prospect, and partner engagements. Key Responsibilities: Cloud Infrastructure Management & Support of TechSales activities: Install, upgrade, configure, and optimize the Informatica platform, both on-premises and Cloud platform runtime environments. Manage the configuration, security, and networking aspects of Informatica Cloud demo platforms and resources. Coordinate with Cloud Trust Operations to ensure smooth implementation of Informatica Cloud Platform changes. Monitor cloud environments across AWS, Azure, GCP, and Oracle Cloud to detect potential issues and mitigate risks proactively. Analyse cloud resource utilization and implement cost-optimization strategies while ensuring performance and reliability. Security & Compliance: Implement security best practices, including threat monitoring, server log audits, and compliance measures. Work towards identifying and mitigating vulnerabilities to ensure a robust security posture. Automation & DevOps Implementation: Automate deployments and streamline operations using Bash/Python, Ansible, and DevOps methodologies. Install, manage, and maintain Docker containers to support scalable environments. Collaborate with internal teams to drive automation initiatives that enhance efficiency and reduce manual effort. Technical Expertise & Troubleshooting: Apply strong troubleshooting skills to diagnose and resolve complex issues on Informatica Cloud demo environments, Docker containers, and Hyperscalers (AWS, Azure, GCP, OCI). Maintain high availability and performance of the Informatica platform and runtime agent. Manage user roles, access controls, and permissions within the Informatica Cloud demo platform. Continuous Learning & Collaboration: Stay updated on emerging cloud technologies and automation trends through ongoing professional development. Work closely with Informatica support to drive platform improvements and resolve technical challenges. Scheduling & On-Call Support: Provide 24x5 support as per business requirements, ensuring seamless operations. Role Essentials: Automation & DevOps Expertise: Proficiency in Bash/Python scripting. Strong understanding of DevOps principles and CI/CD pipelines. Hands-on experience in automation tools like Ansible. Cloud & Infrastructure Management: Experience in administering Cloud data management platforms and related SaaS. Proficiency in Unix/Linux/Windows environments. Expertise in cloud computing platforms (AWS, Azure, GCP, OCI). 
Hands-on experience with Docker, Containers, and Kubernetes. Database & Storage Management: Experience with relational databases (MySQL, Oracle, Snowflake). Strong SQL skills for database administration and optimization. Monitoring & Observability: Familiarity with monitoring tools such as Grafana. Education & Experience: BE or equivalent educational background, with a combination of relevant education and experience being considered. Minimum 5+ years of relevant professional experience. This role offers an opportunity to work in a dynamic, cloud-driven, and automation-focused environment, contributing to the seamless execution of technical sales initiatives. Preferred Skills: Experience in administering Informatica Cloud (IDMC) and related products. Experience with storage solutions like Snowflake, Databricks, Redshift, Azure Synapse and improving database performance. Hands-on experience with Informatica Platform (On-Premises).
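Illustrative note (not part of the posting): the role above mixes Python scripting with Docker-based demo environments. A small, hedged sketch of the kind of health-check automation it implies follows; it assumes the Docker CLI is installed and containers are managed locally.

```python
# Hypothetical automation sketch: list exited or unhealthy Docker containers
# so a demo environment can be remediated before a technical-sales session.
import json
import subprocess

def list_problem_containers():
    # `docker ps -a --format '{{json .}}'` prints one JSON object per container
    result = subprocess.run(
        ["docker", "ps", "-a", "--format", "{{json .}}"],
        capture_output=True, text=True, check=True,
    )
    problems = []
    for line in result.stdout.splitlines():
        container = json.loads(line)
        status = container.get("Status", "")
        if "Exited" in status or "unhealthy" in status:
            problems.append((container["Names"], status))
    return problems

if __name__ == "__main__":
    for name, status in list_problem_containers():
        print(f"needs attention: {name} ({status})")
```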

Posted 4 days ago

Apply

0.0 - 18.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Job Information Date Opened 07/24/2025 Job Type Full Time Work Experience 10-18 years Industry IT Services Number of Positions 1 City Chennai State/Province Tamil Nadu Country India Zip/Postal Code 600001 About Us Why a career in Zuci is unique! Constant attention is the source of our perfection. We fundamentally believe that building a career is all about consistency. If you jog or walk for a few days, it won’t bring in big results. If you do the right things every day for hundreds of days, you'll become lighter, more flexible, and you'll start enjoying your work and life more. Our customers trust us because of our unwavering consistency, enabling us to deliver high-quality work and thereby give our customers and Team Zuci the best shot at extraordinary outcomes. Do you see the big picture? Is Digital Engineering your forte? Job Description Solution Architect (Data & AI Focus) Location: Chennai/Bangalore Experience: 15+ Years Employment Type: Full Time Role Description: We are seeking a highly experienced and strategic Solution Architect with a strong focus on Data and AI. This role is critical for designing comprehensive, scalable, and robust technical solutions that integrate data engineering, business intelligence, and data science capabilities. The ideal candidate will be a thought leader in enterprise architecture, capable of translating complex business requirements into technical blueprints, guiding cross-functional teams, and ensuring the successful implementation of end-to-end data and AI solutions. Responsibilities: Define the overall technical architecture for data and AI solutions, ensuring alignment with business strategy and enterprise architectural principles. Design end-to-end data pipelines, data warehouses, data lakes, and AI/ML platforms, considering scalability, security, performance, and cost-effectiveness. Provide technical leadership and guidance to Data Engineering, Business Intelligence, and Data Science teams, ensuring adherence to architectural standards and best practices. Collaborate extensively with pre-sales, sales, marketing, and other department units to understand business needs, define technical requirements, and present solution proposals to clients and internal stakeholders. Evaluate and recommend appropriate technologies, tools, and platforms (open source, commercial, cloud) for various data and AI initiatives. Identify potential technical risks and challenges, proposing mitigation strategies and ensuring solution resilience. Create detailed architectural documentation, including design specifications, data flow diagrams, and integration patterns. Stay updated with the latest architectural patterns, cloud services, and industry trends in data and AI, driving continuous improvement and innovation. Tools & Technologies: Cloud Architecture: Deep expertise across at least one major cloud provider (AWS, Azure, GCP) with strong understanding of their data, analytics, and AI services. Data Platforms: Snowflake, Databricks Lakehouse, Google BigQuery, Amazon Redshift, MS Fabric. Integration Patterns: API Gateways, Microservices, Event-Driven Architectures, Message Queues (Kafka, RabbitMQ, SQS, Azure Service Bus). Data Modeling: Advanced data modeling techniques (Dimensional Modeling, Data Vault, Entity-Relationship Modeling). Security & Compliance: Understanding of data security best practices, compliance regulations (GDPR, HIPAA), and cloud security frameworks.
DevOps/MLOps: CI/CD pipelines, Infrastructure as Code (Terraform, CloudFormation), containerization (Docker, Kubernetes). Programming Languages: Proficiency in at least one (Python, Java, Scala) for prototyping and architectural validation. Version Control: Git.

Posted 4 days ago

Apply

0.0 - 4.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Support Engineer at UiPath in Jaipur, you will have the opportunity to work in a customer-facing role that involves problem-solving and assisting customers with technical issues related to the Peak platform and their deployed applications. This role is ideal for individuals with a genuine interest in technology, particularly in fields like Data Engineering, Data Science, or Platform Ops. While some basic knowledge of SQL and Python is required, this position does not involve software development. Your primary responsibilities will include troubleshooting and resolving customer issues on the Peak platform, taking ownership of problems from investigation to resolution, analyzing application logs and system outputs to identify errors, scripting to automate support tasks, monitoring system health, assisting with infrastructure security, communicating updates clearly to both internal teams and customers, contributing to internal documentation, and participating in an on-call rotation to support customers when needed. To be successful in this role, you should have a computer science degree or equivalent academic experience in technology, be proficient in Python, Bash, and SQL, have familiarity with Linux and cloud platforms like AWS, GCP, or Azure, possess strong communication skills in English, be well-organized with strong problem-solving abilities, and have excellent interpersonal skills to work effectively in a team environment. While these are preferred qualifications, UiPath encourages individuals with varying levels of experience and a passion for the job to apply. UiPath values flexibility in work arrangements, with a mix of hybrid, office-based, and remote work options available based on business needs and role requirements. Applications for this role are reviewed on a rolling basis, and there is no fixed deadline for submission, as the application window may change depending on the volume of applicants or if a suitable candidate is selected promptly. If you believe you have the drive and enthusiasm to excel in this role, we encourage you to apply and be a part of our dynamic team at UiPath.

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As an AWS Senior Data Engineer at our organization, you will be responsible for working with various technologies and tools to support the data engineering activities. Your primary tasks will include utilizing SQL for data querying and manipulation, developing data processing pipelines using PySpark, and integrating data from API endpoints. Additionally, you will be expected to work with AWS services such as Glue for ETL processes, S3 for data storage, Redshift for data warehousing, Step Functions for workflow automation, Lambda for serverless computing, CloudWatch for monitoring, and AppFlow for data integration. You should have experience with CloudFormation and administrative roles, as well as knowledge of SDLF & OF frameworks for data lifecycle management. Understanding S3 ingestion patterns and version control using Git is essential for this role. Exposure to tools like JFrog, ADO, SNOW, Visual Studio, DBeaver, and SF inspector will be beneficial in supporting your data engineering tasks effectively. Your role will involve collaborating with cross-functional teams to ensure the successful implementation of data solutions within the AWS environment.
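Illustrative note (not part of the posting): the Glue-plus-PySpark combination above usually takes the shape of a Glue job script. A minimal, hedged skeleton follows; bucket names, columns, and the job-argument list are assumptions.

```python
# Hypothetical Glue PySpark job: read raw S3 data, clean it, write curated output.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw JSON dropped by an upstream API integration
raw = spark.read.json("s3://demo-raw-bucket/orders/")

# Simple cleanup: drop duplicates and null order ids before curation
curated = raw.dropDuplicates(["order_id"]).filter("order_id IS NOT NULL")

curated.write.mode("overwrite").parquet("s3://demo-curated-bucket/orders/")
job.commit()
```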

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

About the Role We are looking for a New-Age Data Engineer who can go beyond traditional pipelines and work in a fast-paced AI-first eCommerce environment. You will play a pivotal role in designing, building, and maintaining robust data architectures that power integrations with multiple marketplaces (like Amazon, Flipkart, Shopify, Meesho, and others) and enable real-time decisioning through AI/ML systems. If you’re passionate about eCommerce, love solving complex data problems, and enjoy working at the intersection of APIs, cloud infrastructure, and AI, this role is for you. Key Responsibilities Design and implement scalable, fault-tolerant data pipelines for ingestion, transformation, and synchronization across multiple marketplaces Integrate and manage APIs and webhook systems for real-time data capture from external platforms Collaborate with AI/ML teams to serve structured and semi-structured data for model training and inference Build and maintain data lakes and data warehouses with efficient partitioning and schema design Ensure data reliability, accuracy, governance, and lineage tracking Automate monitoring and anomaly detection using modern observability tools Optimize data infrastructure costs on cloud platforms (AWS/GCP/Azure) Key Qualifications 2–5 years of experience in data engineering or backend systems with large-scale data Strong programming skills in Python or Scala (bonus: familiarity with TypeScript/Node.js) Deep understanding of SQL and NoSQL databases (PostgreSQL, MongoDB, etc.) Experience with distributed data processing tools like Apache Spark, Kafka, or Airflow Proficiency in using APIs, webhooks, and event-driven data architectures Experience working with cloud-native data tools (e.g., AWS Glue, S3, Redshift, BigQuery, or Snowflake) Solid grasp of data modeling, ETL/ELT design, and performance tuning Bonus: Familiarity with data versioning tools (e.g., DVC, LakeFS), AI pipelines, or vector databases Nice to Have Experience in the eCommerce or retail tech domain Exposure to MLOps and working with feature stores Knowledge of GraphQL, gRPC, or modern API paradigms Interest or prior experience in real-time recommender systems or personalization engines What We Offer Work in a high-growth, AI-native eCommerce tech environment Autonomy to choose tools, propose architectures, and shape the data roadmap Opportunity to work closely with AI scientists, product managers, and integration teams Flexible work location, inclusive culture, and ownership-driven roles 📩 Interested candidates can send their resume to : [careers@kartavyatech.com] ✉️ Subject Line: Application – Data Engineer (AI x eCommerce)
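Illustrative note (not part of the posting): the role above highlights webhook-based, real-time capture of marketplace events. A minimal, hedged sketch of such a receiver follows; the endpoint path, payload fields, and in-memory buffer are assumptions standing in for a real queue or landing zone.

```python
# Hypothetical FastAPI webhook receiver for marketplace order events.
from datetime import datetime, timezone

from fastapi import FastAPI, Request

app = FastAPI()
EVENTS = []  # stand-in for a message queue / data-lake landing zone

@app.post("/webhooks/orders")
async def receive_order_event(request: Request):
    payload = await request.json()
    # Stamp arrival time so downstream pipelines can track ingestion latency
    EVENTS.append({
        "received_at": datetime.now(timezone.utc).isoformat(),
        "marketplace": payload.get("marketplace", "unknown"),
        "order_id": payload.get("order_id"),
        "body": payload,
    })
    return {"status": "accepted", "buffered": len(EVENTS)}
```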

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Engineer at Buildnetic, a Singapore-HQ company located in Bangalore, you will leverage your 8 to 12 years of experience to play a crucial role in designing, implementing, and managing data infrastructure that drives data-driven decision-making processes. In this hybrid role, you will work with cutting-edge technologies to construct data pipelines, architect data models, and uphold data integrity. Your key responsibilities will include designing, developing, and maintaining scalable data pipelines and architectures, working with large datasets to create efficient ETL processes, and partnering with data scientists, analysts, and stakeholders to discern business requirements. Ensuring data quality through cleaning, validation, and profiling, implementing data models for optimal performance in data warehousing and data lakes, and managing cloud data infrastructure on platforms like AWS, Azure, or GCP will be essential aspects of your role. You will work with a variety of programming languages including Python, SQL, Java, and Scala, alongside data warehousing and data lake tools such as Snowflake, Redshift, Databricks, Hadoop, Hive, and Spark. Your expertise in data modeling techniques, ETL tools like Informatica and Talend, and management of both NoSQL and relational databases will be critical. Additionally, experience with CI/CD pipelines, Git for version control, troubleshooting complex data infrastructure issues, and proficiency in Linux/Unix systems will be advantageous. If you possess strong problem-solving skills, effective communication abilities, and prior experience working in a hybrid work environment, Buildnetic offers you an opportunity to be part of a forward-thinking company that prioritizes innovation and technological advancement. You will collaborate with a talented and collaborative team, enjoy a flexible hybrid working model, and receive a competitive salary and benefits package. If you are passionate about data engineering and eager to work with the latest technologies, we look forward to hearing from you.

Posted 4 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description BI Analyst (Senior Engineer/Lead) We at Pine Labs are looking for those who share our core belief - Every Day is Game day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services. Role Purpose: We are looking for a Sr. BI Analyst / Lead who will support the BI Analyst team in implementing new dashboard features and writing complex SQL queries to get raw data ready for dashboarding. The preferred candidate should have an analytics mindset to convert raw data into user-friendly, dynamic dashboards, along with experience developing paginated reports. This is an individual-contributor position that leads the team from the technical front. Responsibilities We Entrust You With: Participate in peer reviews of reports/dashboards created by internal team members and ensure high standards as per defined reporting/dashboarding guidelines. Product thinking, problem solving, and strategic orientation. Must have expertise in Apache Superset BI tools and SSRS. Excellent skills in SSRS and SSIS, and expert-level SQL scripting. Nice to have: sound knowledge of AWS QuickSight and PowerShell. Excellent SQL scripting for complex queries. Proficient in both verbal and written communication. Knowledge of ETL concepts and tools, e.g. Talend/SSIS. Knowledge of query optimization in SQL and Redshift. Nice to have: sound knowledge of data warehousing and data lake concepts. Understands the requirements of a dashboard/report from management stakeholders and has the analytical view to design dynamic dashboards using any BI analytics tool. Required Skills: TSQL, ANSI SQL, PSQL, SSIS, SSRS, Apache Superset, AWS Redshift, QuickSight. Good to have skills: Data Lake concepts, analytical ability, business and merchant requirement understanding. What Matters In This Role: Apache Superset, AWS QuickSight, SSRS, SSIS for developing dashboards is preferred. Excellent TSQL, ANSI SQL, data modeling, and querying from multiple data stores is mandatory. Experience with Microsoft SSRS and SSIS is needed for developing paginated dashboards. What We Value In Our People: You take the shot: You Decide Fast and You Deliver Right. You are the CEO of what you do: you show ownership and make things happen. You own tomorrow: by building solutions for the merchants and doing the right thing. You sign your work like an artist: You seek to learn and take pride in the work you do. (ref:hirist.tech)

Posted 4 days ago

Apply

4.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description Data Engineer We at Pine Labs are looking for those who share our core belief - Every Day is Game day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services. Role Purpose: We are looking for skilled Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing. Responsibilities We Entrust You With: Data Pipeline Development: Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue. Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis. Real-Time Analytics: Configure and optimize Apache Pinot clusters for sub-second query performance and high availability. Design indexing strategies and schema structures to support real-time and historical data use cases. Cloud Infrastructure Management: Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions. Implement infrastructure as code (IaC) using tools like Terraform or AWS CDK. Performance Optimization: Optimize data pipelines and queries to handle high throughput and large-scale data efficiently. Monitor and tune Apache Pinot and AWS components to achieve peak performance. Data Governance & Security: Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC2). Implement data lineage, access controls, and auditing mechanisms. Collaboration: Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions. Collaborate in an Agile environment, participating in sprints, standups, and retrospectives. Relevant Work Experience: 4-12 years of hands-on experience in data engineering or related roles. Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse). Proficiency in Python, Java, or Scala for data processing and pipeline development. Strong SQL skills and experience with both relational and NoSQL databases. Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis. Familiarity with big data tools like Apache Spark, Flink, or Airflow. Strong problem-solving skills and a proactive approach to challenges. Excellent communication and collaboration abilities in cross-functional teams. Preferred Qualifications: Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg). Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch. Familiarity with data visualization tools like Tableau or Superset. What We Offer: Competitive compensation based on experience. Flexible work environment with opportunities for growth. Work on cutting-edge technologies and projects in data engineering and analytics.
What We Value In Our People You take the shot : You Decide Fast and You Deliver Right You are the CEO of what you do: you show ownership and make things happen You own tomorrow : by building solutions for the merchants and doing the right thing You sign your work like an artist: You seek to learn and take pride in the work you do (ref:hirist.tech)
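Illustrative note (not part of the posting): the role above centers on real-time ingestion into Apache Pinot from Kafka. A minimal, hedged sketch of the producer side of that flow follows; the topic name, broker address, and event schema are assumptions, and it uses the kafka-python client.

```python
# Hypothetical sketch: publish events to the Kafka topic a real-time Pinot
# table would consume from.
import json
import time

from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_payment(merchant_id: str, amount: float) -> None:
    event = {
        "merchant_id": merchant_id,
        "amount": amount,
        "event_ts": int(time.time() * 1000),  # epoch millis for Pinot's time column
    }
    producer.send("payments_events", value=event)

publish_payment("M-1001", 249.0)
producer.flush()
```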

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As a Data Architect at Beinex located in Kochi, Kerala, you will be responsible for collaborating with the Sales team on RFPs, pre-sales activities, project delivery, and support. Your role will involve delivering on-site technical engagements with customers, participating in pre-sales visits, understanding customer requirements, defining project timelines, and implementing solutions. Additionally, you will work on both on- and off-site projects to assist customers in migrating from their existing data warehouses to Snowflake and other databases. You should have at least 8 years of experience in IT platform implementation, development, DBA, and data migration in Relational Database Management Systems (RDBMS). Furthermore, you should possess 5+ years of hands-on experience in implementing and performance tuning MPP databases. Proficiency in Snowflake, Redshift, Databricks, or Azure Synapse is essential, along with the ability to prioritize projects effectively. Experience in analyzing data warehouses such as Teradata, Netezza, Oracle, and SAP will be valuable in this role. Your responsibilities will also include designing database environments, analyzing production deployments, optimizing performance, writing SQL and stored procedures, conducting data validation and data quality tests, and planning migrations to Snowflake. You will be expected to possess strong communication skills, problem-solving abilities, and the capacity to work effectively both independently and as part of a team. At Beinex, you will have access to various perks including comprehensive health plans, learning and development opportunities, workation and outdoor training, a hybrid working environment, and on-site travel opportunities. Join us to be a part of a dynamic team and advance your career in a supportive and engaging work environment.

Posted 4 days ago

Apply

10.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Description We are looking for a skilled and experienced ETL Engineer to join our growing team at Grazitti Interactive. In this role, you will be responsible for building and managing scalable data pipelines across traditional and cloud-based platforms. You will work with structured and unstructured data sources, leveraging tools such as SQL Server, Snowflake, Redshift, and BigQuery to deliver high-quality data solutions. If you have hands-on experience in Python, PySpark, and cloud platforms like AWS or GCP, along with a passion for transforming data into insights, we’d love to connect with you. Key Skills Strong experience (4–10 years) in ETL development using platforms like SQL Server, Oracle, and cloud environments like Amazon S3, Snowflake, Redshift, Data Lake, and Google BigQuery. Proficient in Python, with hands-on experience creating data pipelines using APIs. Solid working knowledge of PySpark for large-scale data processing. Ability to output results in various formats, including JSON, data feeds, and reports. Skilled in data manipulation, schema design, and transforming data across diverse sources. Strong understanding of core AWS/Google Cloud Services and basic cloud architecture. Capable of developing, deploying, and debugging cloud-based data assets. Expert-level proficiency in SQL with a solid grasp of relational and cloud-based databases. Excellent ability to understand and adapt to evolving business requirements. Strong communication and collaboration skills, with experience in onsite/offshore delivery models. Familiarity with Marketo, Salesforce, Google Analytics, and Adobe Analytics. Working knowledge of Tableau and Power BI for data visualization and reporting. Roles And Responsibilities Design and implement robust ETL processes to ensure data integrity and accuracy across systems. Develop reusable data solutions and optimize performance across traditional and cloud environments. Collaborate with cross-functional teams, including data analysts, marketers, and engineers, to define data requirements and deliver insights. Take ownership of end-to-end data pipelines, from requirement gathering to deployment and monitoring. Ensure compliance with internal QMS and ISMS standards. Proactively report any data incidents or concerns to reporting managers.
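Illustrative note (not part of the posting): the role above asks for Python pipelines built around APIs with JSON feed outputs. A minimal, hedged sketch of that extraction step follows; the endpoint, pagination scheme, and output path are placeholders, not a real customer integration.

```python
# Hypothetical API-to-JSON extraction step for a downstream warehouse load.
import json

import requests

def extract_pages(base_url: str, pages: int = 3):
    rows = []
    for page in range(1, pages + 1):
        resp = requests.get(base_url, params={"page": page}, timeout=30)
        resp.raise_for_status()
        rows.extend(resp.json().get("results", []))
    return rows

if __name__ == "__main__":
    data = extract_pages("https://api.example.com/v1/leads")
    # Emit a JSON feed that a Snowflake/Redshift/BigQuery load could pick up
    with open("leads_feed.json", "w", encoding="utf-8") as fh:
        json.dump(data, fh, indent=2)
    print(f"wrote {len(data)} records")
```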

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As an integral part of our Data Automation & Transformation team, you will experience unique challenges every day. We are looking for someone with a positive attitude, entrepreneurial spirit, and a willingness to dive in and get things done. This role is crucial to the team and will provide exposure to various aspects of managing a banking office. In this role, you will focus on building curated Data Products and modernizing data by moving it to Snowflake. Your responsibilities will include working with cloud databases such as AWS and Snowflake, along with coding languages like SQL, Python, and PySpark. You will analyze data patterns across large multi-platform ecosystems and develop automation solutions, analytics frameworks, and data consumption architectures utilized by Decision Sciences, Product Strategy, Finance, Risk, and Modeling teams. Ideally, you should have a strong analytical and technical background in financial services, particularly in small business banking or commercial banking segments. Your key responsibilities will involve migrating Private Client Office Data to Public Cloud (AWS and Snowflake), collaborating closely with the Executive Director of Automation and Transformation on new projects, and partnering with various teams to support data analytics needs. You will also be responsible for developing data models, automating data assets, identifying technology gaps, and supporting data integration projects with external providers. To qualify for this role, you should have at least 3 years of experience in analytics, business intelligence, data warehousing, or data governance. A Master's or Bachelor's degree in a related field (e.g., Data Analytics, Computer Science, Math/Statistics, or Engineering) is preferred. You must have a solid understanding of programming languages such as SQL, SAS, Python, Spark, Java, or Scala, and experience in building relational data models across different technology platforms. Excellent communication, time management, and multitasking skills are essential for this role, along with experience in data visualization tools and compliance with regulatory standards. Knowledge of risk classification, internal controls, and commercial banking products and services is desirable. Preferred qualifications include experience with Big Data and Cloud platforms, data wrangling tools, dynamic reporting applications like Tableau, and proficiency in data architecture, data mining, and analytical methodologies. Familiarity with job scheduling workflows, code versioning software, and change management tools would be advantageous.

Posted 4 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description As a Sr. Associate, you will work closely with internal and external stakeholders and deliver high-quality analytics solutions to real-world Pharma commercial organizations’ business problems. You will bring deep Pharma / Healthcare domain expertise and use cloud data tools to help solve complex problems. Key Responsibilities Collaborate with internal teams and client stakeholders to deliver Business Intelligence solutions that support key decision-making for the Commercial function of Pharma organizations. Leverage deep domain knowledge of pharmaceutical sales, claims, and secondary data to structure and optimize BI reporting frameworks. Develop, maintain, and optimize interactive dashboards and visualizations using BI tools like Power BI and Qlik, to enable data-driven insights. Translate business requirements into effective data visualizations and actionable reporting solutions tailored to end-user needs. Write complex SQL queries and work with large datasets housed in Data Lakes or Data Warehouses to extract, transform, and present data efficiently. Conduct data validation, QA checks, and troubleshoot stakeholder-reported issues by performing root cause analysis and implementing solutions. Collaborate with data engineering teams to define data models, KPIs, and automate data pipelines feeding BI tools. Manage ad-hoc and recurring reporting needs, ensuring accuracy, timeliness, and consistency of data outputs. Drive process improvements in dashboard development, data governance, and reporting workflows. Document dashboard specifications, data definitions, and maintain data dictionaries. Stay up to date with industry trends in BI tools, visualization best practices, and emerging data sources in the healthcare and pharma space. Prioritize and manage multiple BI project requests in a fast-paced, dynamic environment. Qualifications 2–4 years of experience in BI development, reporting, or data visualization, preferably in the pharmaceutical or life sciences domain. Strong hands-on experience building dashboards using Power BI and Qlik. Advanced SQL skills for querying and transforming data across complex data models. Familiarity with pharma data such as Sales, Claims, and secondary market data is a strong plus. Experience in data profiling, cleansing, and standardization techniques. Ability to translate business questions into effective visual analytics. Strong communication skills to interact with stakeholders and present data insights clearly. Self-driven, detail-oriented, and comfortable working with minimal supervision in a team-oriented environment. Exposure to data warehousing concepts and cloud data platforms (e.g., Snowflake, Redshift, or BigQuery) is an advantage. Education Bachelor’s or Master’s Degree (computer science, engineering or other technical disciplines)

Posted 4 days ago

Apply

7.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description for Consultant - Data Engineer Key Responsibilities and Core Competencies: You will be responsible for managing and delivering multiple Pharma projects. Lead a team of at least 8 members, resolving their technical and business-related problems and other queries. Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables. Collaborate well with onshore teams and senior stakeholders. Should have a fair understanding of Data Capabilities (Data Management, Data Quality, Master and Reference Data). Exposure to project management methodologies including Agile and Waterfall. Experience working on RFPs would be a plus. Required Technical Skills Proficient in Python, PySpark, and SQL. Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc. Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc. Good experience in ETL, data modelling, and building ETL pipelines. Conceptual knowledge of relational database technologies, Data Lakes, Lakehouses, etc. Sound knowledge of data operations, quality, and data governance. Preferred Qualifications Bachelor’s or Master’s in Engineering/MCA or equivalent degree. 7-9 years of experience as a Data Engineer, with at least 2 years in managing medium to large scale programs. Minimum 5 years of Pharma and Life Science domain exposure in IQVIA, Veeva, Symphony, IMS, etc. High motivation, good work ethic, maturity, self-organization and personal initiative. Ability to work collaboratively and provide support to the team. Excellent written and verbal communication skills. Strong analytical and problem-solving skills. Location Preferably Hyderabad, India About Us Chryselys is a US-based Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. Chryselys was founded in the heart of US Silicon Valley in November 2019 with the vision of delivering high-value business consulting, solutions, and services to clients in the healthcare and life sciences space. We are trusted partners for organizations that seek to achieve high-impact transformations and reach their higher-purpose mission. Chryselys India supports our global clients to achieve high-impact transformations and reach their higher-purpose mission. Please visit https://www.linkedin.com/company/chryselys/mycompany/ https://chryselys.com/ for more details.

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Work Location: Hyderabad What Gramener offers you Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, career paths, and steady growth prospects with great scope to innovate. We aim to create an ecosystem of easily configurable data applications focused on storytelling for public and private use. Data Architect We are seeking an experienced Data Architect to design and govern scalable, secure, and efficient data platforms in a data mesh environment. You will lead data architecture initiatives across multiple domains, enabling self-serve data products built on Databricks and AWS, and support both operational and analytical use cases. Key Responsibilities Design and implement enterprise-grade data architectures leveraging the medallion architecture (Bronze, Silver, Gold). Develop and enforce data modelling standards, including flattened data models optimized for analytics. Define and implement MDM strategies (Reltio), data governance frameworks (Collibra), and data classification policies. Lead the development of data landscapes, capturing sources, flows, transformations, and consumption layers. Collaborate with domain teams to ensure consistency across decentralized data products in a data mesh architecture. Guide best practices for ingesting and transforming data using Fivetran, PySpark, SQL, and Delta Live Tables (DLT). Define metadata and data quality standards across domains. Provide architectural oversight for data platform development on Databricks (Lakehouse) and the AWS ecosystem. Key Skills & Qualifications Must-Have Technical Skills (Reltio, Collibra, Ataccama, Immuta): Experience in the Pharma domain. Data Modeling (dimensional, flattened, common data model, canonical, and domain-specific; entity-level data understanding from a business process point of view). Master Data Management (MDM) principles and tools (Reltio). Data Governance and Data Classification frameworks. Strong experience with Fivetran, PySpark, SQL, Python. Deep understanding of Databricks (Delta Lake, Unity Catalog, Workflows, DLT). Experience with AWS services related to data (e.g., S3, Glue, Redshift, IAM). Experience with Snowflake. Architecture & Design Proven expertise in Data Mesh or Domain-Oriented Data Architecture. Experience with medallion/lakehouse architecture. Ability to create data blueprints and landscape maps across complex enterprise systems. Soft Skills Strong stakeholder management across business and technology teams. Ability to translate business requirements into scalable data designs. Excellent communication and documentation skills. Preferred Qualifications Familiarity with regulatory and compliance frameworks (e.g., GxP, HIPAA, GDPR). Background in data product building. About Us We consult and deliver solutions to organizations where data is the core of decision-making. We undertake strategic data consulting for organizations, laying out the roadmap for data-driven decision-making. This helps organizations convert data into a strategic differentiator. Through a host of our products, solutions, and service offerings, we analyze and visualize large amounts of data. To know more about us visit the Gramener Website and Gramener Blog.
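Illustrative note (not part of the posting): the role above references the medallion (Bronze/Silver/Gold) flow on Databricks. A minimal, hedged Bronze-to-Silver sketch follows in plain PySpark on Delta tables; table names, columns, and cleansing rules are assumptions, and a Delta-enabled Spark environment (such as Databricks) is assumed.

```python
# Hypothetical medallion step: promote raw Bronze records to a cleansed Silver table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingested records, kept as delivered (e.g. landed via Fivetran)
bronze = spark.read.table("bronze.patient_events")

# Silver: de-duplicated, conformed, and filtered to valid records
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("patient_id").isNotNull())
)

silver.write.mode("overwrite").format("delta").saveAsTable("silver.patient_events")
```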

Posted 4 days ago

Apply

2.0 - 4.0 years

9 - 18 Lacs

Chennai

Remote

Role & responsibilities
• Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent practical experience).
• Proven experience as a Data Engineer, Data Architect, or a similar role in a data-driven environment.
• Proficiency in programming languages and frameworks such as Node.js, Python, or Spring Boot.
• Strong SQL skills, with experience in database management (e.g., MS SQL Server, PostgreSQL, Redshift, BigQuery, etc.).
• Experience with Azure cloud platforms, particularly data storage and processing services.
• Hands-on experience with ETL tools and frameworks (e.g., Apache Kafka, Apache Airflow, Talend, etc.).
• Familiarity with data warehousing solutions and data modeling techniques.
• Knowledge of big data technologies (e.g., Hadoop, Spark, etc.) and Machine Learning is a plus.
• Strong understanding of data security principles and best practices.
• Ability to optimize query performance over millions of rows of data.
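One common technique behind the "millions of rows" requirement above is to stream large result sets with a server-side cursor instead of loading everything into memory. The sketch below is illustrative only, using psycopg2 against PostgreSQL; the connection string, table, and columns are hypothetical placeholders.

```python
# Illustrative sketch: streaming a large PostgreSQL result set in batches
# with a server-side (named) cursor. Connection details and table/columns
# are placeholders, not taken from the posting.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl password=REPLACE_ME host=localhost")

with conn, conn.cursor(name="orders_stream") as cur:  # named cursor = server-side
    cur.itersize = 10_000  # rows fetched per network round trip
    cur.execute(
        "SELECT order_id, amount FROM orders WHERE order_ts >= %s",
        ("2024-01-01",),
    )
    total = 0.0
    for order_id, amount in cur:
        total += float(amount)

print(f"Total amount: {total:.2f}")
conn.close()
```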

Posted 5 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer

Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About The Role
We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment
• Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
• Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
• Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
• Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection.
• Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration
• Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
• Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
• Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
• Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
• Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
• Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).

Cloud & Infrastructure
• Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
• Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow.
• Optimize for cost, latency, and scalability across distributed environments.
• Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities
• Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
• Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
• Enable retrieval-augmented generation (RAG) pipelines for LLMs.
• Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability
• Implement robust access control, encryption, and compliance with SOC 2/GDPR/ISO 27001.
• Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
• Ensure zero-downtime deployments with blue-green/canary release strategies.
• Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills
• Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala is a plus.
• MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
• Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
• Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
• Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
• Visualization: Plotly Dash, Superset, Grafana.

Tech Stack
• Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
• Infrastructure as Code: Terraform, Pulumi, Ansible.
• Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
• Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
• Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
• Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
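To illustrate the kind of MLOps automation described above (training, metric logging, and model versioning), here is a minimal MLflow tracking sketch. The experiment name, model, and metrics are illustrative placeholders, not details from the posting.

```python
# Minimal MLflow sketch: log parameters, metrics, and a versioned model.
# Experiment/model names and the dataset are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-churn-model")

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registering the model creates a new version in the MLflow Model Registry.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="demo-churn-model")
```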

Posted 5 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

Analytics Engineer

We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in Human Health's transformation journey to become the premier "Data First" commercial biopharma organization.

As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical and data expertise for the development of analytical data products that enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – developing best-in-class data pipelines and products, and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.

Your specific responsibilities will include
• Hands-on development of last-mile data products using up-to-date technologies and software/data/DevOps engineering practices
• Enabling data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
• Developing deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
• Building data products based on automated data models, aligned with use case requirements, and advising data scientists, analysts and visualization developers on how to use these data models
• Developing analytical data products for reusability, governance and compliance by design
• Aligning with organization strategy and implementing a semantic layer for analytics data products
• Supporting data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks

Education
B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field

Required Experience
• 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets)
• High proficiency in SQL, Python and AWS
• Good understanding and comprehension of requirements provided by the Data Product Owner and Lead Analytics Engineer
• Experience creating/adopting data models to meet requirements from Marketing, Data Science and Visualization stakeholders
• Experience with feature engineering
• Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
• Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku)
• Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders
• Experience in analytics use cases of pharmaceutical products and vaccines
• Experience in market analytics and related use cases

Preferred Experience
• Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines
• Experience with Agile ways of working, leading or working as part of scrum teams
• Certifications in AWS and/or modern data technologies
• Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors
• Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders
• Experience with data visualization technologies (e.g., Power BI)

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another's thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:

Job Posting End Date: 08/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.

Requisition ID: R335386
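As a small illustration of the feature-engineering and analytical data modeling work this role mentions, the sketch below derives account-level recency/frequency/volume features with pandas. The column names, sample records, and aggregation logic are hypothetical placeholders, not taken from the posting.

```python
# Illustrative feature-engineering sketch for a commercial analytics dataset.
# Columns, sample data, and the snapshot date are hypothetical placeholders.
import pandas as pd

# Example raw fact table: one row per prescription claim.
claims = pd.DataFrame({
    "hcp_id":     ["H1", "H1", "H2", "H2", "H2"],
    "product":    ["A",  "B",  "A",  "A",  "B"],
    "claim_date": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-01-20",
                                  "2024-03-02", "2024-03-15"]),
    "units":      [10, 5, 8, 12, 3],
})

# Build an HCP-level analytical data product: recency, frequency, volume features.
snapshot_date = pd.Timestamp("2024-04-01")
features = (
    claims.groupby("hcp_id")
          .agg(total_units=("units", "sum"),
               n_claims=("claim_date", "count"),
               last_claim=("claim_date", "max"))
          .assign(days_since_last_claim=lambda df: (snapshot_date - df["last_claim"]).dt.days)
          .reset_index()
)

print(features)
```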

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
IN Data Engineering & Analytics (IDEA) Team is looking to hire a rock star Data Engineer to build and manage the largest petabyte-scale data infrastructure in India for Amazon India businesses. The IN Data Engineering & Analytics (IDEA) team is the central data engineering and analytics team for all A.in businesses. The team's charter includes 1) providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams, which includes a central petabyte-scale Redshift data warehouse, analytics infrastructure and frameworks for visualizing and automating generation of reports & insights, and self-service data applications for ingesting, storing, discovering, processing & querying of the data, and 2) providing business-specific data solutions for various business streams like Payments, Finance, Consumer & Delivery Experience.

The Data Engineer will play a key role as a strong owner of our Data Platform. He/she will own and build data pipelines, automations and solutions to ensure the availability, system efficiency, IMR efficiency, scaling, expansion, operations and compliance of the data platform that serves 200+ IN businesses. The role sits at the heart of the technology & business worlds and provides opportunity for growth, high business impact and working with seasoned business leaders.

An ideal candidate will be someone with a sound technical background in managing large data infrastructures, working with petabyte-scale data, building scalable data solutions/automations and driving operational excellence. An ideal candidate will be a self-starter who can start with a platform requirement and work backwards to conceive and devise the best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight and business impact, and someone who 'gets work done' in business time.

Key job responsibilities
• Design/implement automation and manage our massive data infrastructure to scale for the analytics needs of Amazon IN.
• Build solutions to achieve BAA (Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency & compliance.
• Enable efficient data exploration and experimentation on large datasets on our data platform, and implement data access control mechanisms for stand-alone datasets.
• Design and implement scalable and cost-effective data infrastructure to enable Non-IN (Emerging Marketplaces and WW) use cases on our data platform.
• Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Amazon and AWS big data technologies.
• Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
• Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations.
• Enjoy working closely with your peers in a group of very smart and talented engineers.

A day in the life
India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy & timely access to high-quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets and self-service reporting capabilities. Our core responsibilities towards the India marketplace include a) providing systems (infrastructure) & workflows that allow ingestion, storage, processing and querying of data, b) building ready-to-use datasets for easy and faster access to the data, c) automating standard business analysis/reporting/dash-boarding, and d) empowering business with self-service tools to manage data and generate insights.

Basic Qualifications
• 1+ years of data engineering experience
• Experience with SQL
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
• Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
• Experience with big data technologies such as Hadoop, Hive, Spark, EMR
• Experience with any ETL tool such as Informatica, ODI, SSIS, BODI, Datastage, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A3044196
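As an illustration of the Redshift-centric data engineering this role describes, here is a minimal sketch that bulk-loads staged S3 files into a Redshift table using the COPY command via psycopg2. The cluster endpoint, credentials, IAM role ARN, table, and S3 path are hypothetical placeholders.

```python
# Illustrative sketch: load staged S3 data into Amazon Redshift with COPY.
# Endpoint, credentials, IAM role ARN, table and S3 path are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="REPLACE_ME",
)

copy_sql = """
    COPY sales.orders
    FROM 's3://example-bucket/staged/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)                 # bulk-load the staged files
    cur.execute("ANALYZE sales.orders;")  # refresh planner statistics after the load

conn.close()
```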

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
IN Data Engineering & Analytics (IDEA) Team is looking to hire a rock star Data Engineer to build and manage the largest petabyte-scale data infrastructure in India for Amazon India businesses. The IN Data Engineering & Analytics (IDEA) team is the central data engineering and analytics team for all A.in businesses. The team's charter includes 1) providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams, which includes a central petabyte-scale Redshift data warehouse, analytics infrastructure and frameworks for visualizing and automating generation of reports & insights, and self-service data applications for ingesting, storing, discovering, processing & querying of the data, and 2) providing business-specific data solutions for various business streams like Payments, Finance, Consumer & Delivery Experience.

The Data Engineer will play a key role as a strong owner of our Data Platform. He/she will own and build data pipelines, automations and solutions to ensure the availability, system efficiency, IMR efficiency, scaling, expansion, operations and compliance of the data platform that serves 200+ IN businesses. The role sits at the heart of the technology & business worlds and provides opportunity for growth, high business impact and working with seasoned business leaders.

An ideal candidate will be someone with a sound technical background in managing large data infrastructures, working with petabyte-scale data, building scalable data solutions/automations and driving operational excellence. An ideal candidate will be a self-starter who can start with a platform requirement and work backwards to conceive and devise the best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight and business impact, and someone who 'gets work done' in business time.

Key job responsibilities
• Design/implement automation and manage our massive data infrastructure to scale for the analytics needs of Amazon IN.
• Build solutions to achieve BAA (Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency & compliance.
• Enable efficient data exploration and experimentation on large datasets on our data platform, and implement data access control mechanisms for stand-alone datasets.
• Design and implement scalable and cost-effective data infrastructure to enable Non-IN (Emerging Marketplaces and WW) use cases on our data platform.
• Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Amazon and AWS big data technologies.
• Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
• Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations.
• Enjoy working closely with your peers in a group of very smart and talented engineers.

A day in the life
India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy & timely access to high-quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets and self-service reporting capabilities. Our core responsibilities towards the India marketplace include a) providing systems (infrastructure) & workflows that allow ingestion, storage, processing and querying of data, b) building ready-to-use datasets for easy and faster access to the data, c) automating standard business analysis/reporting/dash-boarding, and d) empowering business with self-service tools to manage data and generate insights.

Basic Qualifications
• 1+ years of data engineering experience
• Experience with SQL
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
• Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
• Experience with big data technologies such as Hadoop, Hive, Spark, EMR
• Experience with any ETL tool such as Informatica, ODI, SSIS, BODI, Datastage, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A3044205
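Since this posting also covers pipeline automation and scheduled ingestion, here is a minimal Apache Airflow DAG sketch for a daily extract-and-load task. The DAG id, schedule, and task logic are hypothetical placeholders and not part of the posting.

```python
# Minimal illustrative Airflow DAG: a daily extract-and-load task (placeholder logic).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder: a real pipeline would pull from a source system and load
    # into the warehouse (for example, via COPY into Redshift).
    print(f"Loading data for {context['ds']}")


with DAG(
    dag_id="daily_orders_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_task = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```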

Posted 5 days ago

Apply

8.0 - 13.0 years

15 - 22 Lacs

Chennai

Work from Office

Technical specifications
• 7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.
• Experience in managing support for modern data platforms across Azure, Databricks, Fabric, or Snowflake environments.
• Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
• Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations.
• Hands-on understanding of SQL and scripting languages (Python preferred) for debugging/troubleshooting.
• Proficient with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus.
• Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
• Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
• Strong stakeholder communication, documentation, and presentation skills.
• Experience working with global teams and collaborating across time zones.

Responsibilities
• Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
• Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
• Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
• Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
• Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
• Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
• Ensure documentation is up to date for architecture, SOPs, and common issues.
• Contribute to service reviews, retrospectives, and continuous improvement planning.
• Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
• Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers and internal cluster leads.
• Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Required Skillset
• ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two mandatory).
• Data Warehousing: Azure SQL Server / Redshift / BigQuery / Databricks / Snowflake (any one mandatory).
• Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
• Cloud: Azure (mandatory); AWS or GCP (good to have).
• SQL and Scripting: Ability to read/debug SQL and Python scripts.
• Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
• Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
• DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Behavioural Competencies
At JMAN, we expect our team members to embody the following:
• Self-Driven & Proactive: Own delivery and service outcomes, ensure proactive communication, and manage expectations confidently.
• Adaptability & Resilience: Thrive in a high-performance, entrepreneurial environment and navigate dynamic challenges effectively.
• Operational Excellence: Be process-oriented and focused on SLA adherence, documentation, and delivery consistency.
• Agility & Problem Solving: Adapt quickly to changing priorities, debug effectively, and escalate when needed with clarity.
• Commitment & Engagement: Ensure timesheet compliance, attend meetings regularly, follow company policies, and actively participate in org-wide initiatives.
• Teamwork & Collaboration: Share knowledge, support colleagues, and contribute to talent retention and team success.
• Professionalism & Continuous Improvement: Maintain a professional demeanour and commit to ongoing learning and self-improvement.
• Mentoring & Knowledge Sharing: Guide and support junior team members, fostering a culture of continuous learning and professional growth.
• Advocacy & Organizational Citizenship: Represent JMAN positively, uphold company values, respect others, and honour commitments, including punctuality and timely delivery.
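To illustrate the monitoring and SLA-oriented troubleshooting this role describes, here is a small, generic Python sketch that checks the freshness of a pipeline's last successful run and flags an SLA breach. The metadata table, pipeline name, threshold, and connection details are hypothetical placeholders.

```python
# Illustrative SLA freshness check for a managed data pipeline (placeholder details).
import psycopg2

conn = psycopg2.connect("dbname=warehouse user=monitor password=REPLACE_ME host=localhost")

with conn, conn.cursor() as cur:
    # Placeholder metadata table recording each pipeline's last successful load.
    # The 6-hour SLA is an assumed example threshold.
    cur.execute(
        """
        SELECT max(completed_at) AS last_success,
               max(completed_at) < now() - interval '6 hours' AS sla_breached
        FROM ops.pipeline_runs
        WHERE pipeline = %s AND status = 'success'
        """,
        ("daily_orders_ingestion",),
    )
    last_success, sla_breached = cur.fetchone()

conn.close()

if sla_breached:
    # In a real setup this would open a ticket or page on-call via Datadog/PagerDuty.
    print(f"SLA BREACH: last successful run finished at {last_success}")
else:
    print(f"OK: last successful run finished at {last_success}")
```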

Posted 5 days ago

Apply