
3678 Redshift Jobs - Page 24

JobPe aggregates job listings for convenient browsing; applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with at least 7 years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, and EMR, as well as Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
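The stack this role describes (Glue batch jobs, EMR streaming, Airflow orchestration) typically comes together in a DAG like the minimal sketch below. It assumes a recent Airflow release with the apache-airflow-providers-amazon package installed; the DAG id, Glue job name, region, and SNS topic ARN are hypothetical placeholders, not anything specified in the posting.

```python
# Minimal Airflow DAG sketch: run a nightly AWS Glue batch job, then notify
# an SNS topic. Job and topic names below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.operators.sns import SnsPublishOperator

with DAG(
    dag_id="daily_sales_pipeline",           # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                    # run nightly at 02:00
    catchup=False,
) as dag:
    run_glue_batch = GlueJobOperator(
        task_id="run_glue_batch",
        job_name="daily_sales_to_redshift",  # hypothetical Glue job
        region_name="ap-south-1",
        wait_for_completion=True,
    )

    notify = SnsPublishOperator(
        task_id="notify_success",
        target_arn="arn:aws:sns:ap-south-1:123456789012:pipeline-events",  # placeholder
        message="daily_sales_pipeline finished",
    )

    run_glue_batch >> notify
```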

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

AWS Data Engineer

Primary Skills: AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks

Detailed JD
- Examine business needs to determine the appropriate automation testing technique.
- Maintain the existing regression suites and test scripts; this is an important responsibility of the tester.
- Attend agile ceremonies, including backlog refinement, sprint planning, and daily scrum meetings.
- Execute regression suites for better results.
- Provide results to developers, project managers, stakeholders, and manual testers.

Responsibilities
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.
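As a rough illustration of the Glue/Redshift/S3 ETL work listed above, here is a minimal AWS Glue PySpark job sketch. The bucket paths, Glue connection name, and target table are hypothetical; a real job would take them from job parameters.

```python
# Sketch of an AWS Glue (PySpark) job: read raw CSVs from S3, apply a simple
# transformation, and load the result into Amazon Redshift.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: raw order files landed in S3 (hypothetical path)
orders = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: de-duplicate and fix types
cleaned = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Load: write into Redshift via a Glue catalog connection (placeholder names)
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned_orders"),
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "analytics.orders", "database": "dev"},
    redshift_tmp_dir="s3://example-temp-bucket/redshift/",
)

job.commit()
```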

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Steward with Unity Catalog and Databricks
Experience: 7+ years
Location: Chennai / Hyderabad
Mode: Work from office (WFO)

Responsibilities
- Perform data profiling and structural analysis to identify critical data elements, definitions, and usage patterns.
- Develop and maintain comprehensive documentation, including policies, standards, manuals, and process flows, to support data governance.
- Collaborate with business SMEs to define data domains, establish data products, and identify data stewards and governance artifacts.
- Ensure all data management practices align with established data governance policies, procedures, and compliance requirements.
- Contribute to the design, development, and deployment of data systems by combining technical expertise with hands-on implementation.
- Define and enforce data standards in collaboration with stakeholders to optimize data collection, storage, access, and utilization.
- Lead the implementation of data governance frameworks encompassing data quality, metadata management, and data lineage.
- Drive data stewardship programs and provide expert guidance on governance best practices to business and technical teams.
- Manage and operate data governance platforms such as Collibra, InfoSphere, Erwin, and Unity Catalog.
- Design and automate data quality metrics, dashboards, and reports to monitor governance effectiveness and support continuous improvement.

Basic Qualifications
- Bachelor's degree in Data Science, Computer Science, Information Management, or a related field.
- Minimum 7 years of professional experience in data governance or related data management roles.
- Strong knowledge of data governance frameworks, data quality management, and compliance standards.
- Hands-on experience with Databricks and Unity Catalog.
- Experience with governance tools such as Collibra, InfoSphere, Erwin, or similar platforms.

Preferred Qualifications
- Certifications in data governance or data management (e.g., Certified Information Management Professional, CIMP).
- Familiarity with data privacy regulations such as GDPR and CCPA.
- Experience with AWS cloud services such as S3 and Redshift.
- Proficiency in SQL, Python, or other scripting languages for data analysis and automation.
- Prior experience in the life sciences industry is a plus.
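For context on the Unity Catalog side of this role, below is a small, hedged sketch of the kind of governance housekeeping a steward might script in a Databricks notebook. The catalog, schema, table, column, and group names are invented for illustration.

```python
# Unity Catalog housekeeping from a Databricks notebook: document a table,
# tag a PII column, grant read access, and run a quick profiling check.
# All object names are placeholders; `spark` is provided by the runtime.
statements = [
    # Business definition captured as a table comment
    "COMMENT ON TABLE gov_catalog.sales.customers IS "
    "'Golden customer record owned by the Sales data domain'",

    # Mark a critical data element as PII using Unity Catalog column tags
    "ALTER TABLE gov_catalog.sales.customers "
    "ALTER COLUMN email SET TAGS ('classification' = 'pii')",

    # Grant read-only access to a governed analyst group
    "GRANT SELECT ON TABLE gov_catalog.sales.customers TO `data-analysts`",
]

for stmt in statements:
    spark.sql(stmt)

# Quick data-profiling check: completeness of the tagged column
profile = spark.sql("""
    SELECT COUNT(*)                                        AS row_count,
           SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END)  AS null_emails
    FROM gov_catalog.sales.customers
""")
profile.show()
```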

Posted 2 weeks ago

Apply

7.0 years

6 - 10 Lacs

Noida

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with at least 7 years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, and EMR, as well as Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with at least 7 years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, and EMR, as well as Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

0 years

6 - 7 Lacs

Indore

Remote

Cloud Platform: Amazon Web Services (AWS) – the backbone providing robust, scalable, and secure infrastructure.

Ingestion Layer (Data Ingestion Frameworks):
- Apache NiFi: for efficient, real-time data routing, transformation, and mediation from diverse sources.
- Data Virtuality: facilitates complex ETL and data virtualization, creating a unified view of disparate data.

Data Frameworks (Data Processing & Microservices):
- Rules Engine & Eductor (in-house tools – Scala, Python): our proprietary microservices for specialized data handling and business logic automation.
- Kafka: our high-throughput, fault-tolerant backbone for real-time data streaming and event processing.

Analytics Layer (Analytics Services & Compute):
- Altair: for powerful data visualization and interactive analytics.
- Apache Zeppelin: our interactive notebook for collaborative data exploration and analysis.
- Apache Spark: our unified analytics engine for large-scale data processing and machine learning workloads.

Data Presentation Layer (Client-facing & APIs):
- Client Services (React, TypeScript): for dynamic, responsive, and type-safe user interfaces.
- Client APIs (Node.js, NestJS): for high-performance, scalable backend services.

Access Layer:
- API Gateway (Amazon API Gateway): manages all external API access, ensuring security, throttling, and routing.
- AWS VPN (client, site-to-site, OpenVPN): secure network connectivity.
- Endpoints & service access (S3, Lambda): controlled access to core AWS services.
- DaaS (Data-as-a-Service – Dremio, Data Virtuality, Power BI): empowering self-service data access and insights.

Security Layer:
- Firewall (AWS WAF): protects web applications from common exploits.
- IdM/IAM (Keycloak, AWS Cognito): robust identity and access management.
- Security groups & policies (AWS): network-level security and granular access control.
- ACLs (Access Control Lists – AWS): fine-grained control over network traffic.
- VPCs (Virtual Private Clouds – AWS): isolated and secure network environments.

Data Layer (Databases & Storage):
- OpenSearch Service: for powerful search, analytics, and operational data visualization.
- Data warehouse – Amazon Redshift: our primary analytical data store.
- Databases (PostgreSQL, MySQL, OpenSearch): robust relational and search-optimized databases.
- Storage (S3 object storage, EBS, EFS): highly scalable, durable, and cost-effective storage solutions.

Compute & Orchestration:
- EKS (Amazon Elastic Kubernetes Service): manages our containerized applications, providing high availability and scalability for microservices.

Job Types: Full-time, Contractual / Temporary
Contract length: 6 months
Pay: ₹50,000.00 - ₹60,000.00 per month
Schedule: Monday to Friday, weekend availability
Work Location: Remote
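To make the ingestion-to-lake path above concrete, here is a hedged Spark Structured Streaming sketch that consumes a Kafka topic and lands partitioned Parquet in S3 for downstream Redshift or OpenSearch loads. Broker addresses, topic, schema, and bucket paths are placeholders, not details from this posting.

```python
# Spark Structured Streaming sketch: Kafka -> JSON parse -> partitioned Parquet on S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder brokers
    .option("subscribe", "platform-events")               # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", F.to_date("occurred_at"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-data-lake/events/")                  # placeholder bucket
    .option("checkpointLocation", "s3a://example-data-lake/checkpoints/events/")
    .partitionBy("event_date")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```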

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company
Cognine was formed by passionate people with engineering backgrounds, intending to provide core software engineering services to clients worldwide. Cognine is an AI-driven technology solutions provider, empowering businesses with AI, Automation, Product Engineering, and Data Intelligence. We help organizations modernize IT ecosystems, build AI-powered applications, streamline operations, and drive innovation with Generative AI and advanced analytics. We have grown from a small team in Chicago & Hyderabad to a global tech organization with 200+ engineers. The culture at Cognine embeds key values of engineering mindset, quality, and transparency into every employee. We have invested in organic growth, building the sustainable technology strategy, design, data, and engineering capabilities required to bring a truly integrated approach to solving our clients' toughest challenges. Our collaborative, cross-functional teams deliver tangible results, fast.

About the Role
- Create frameworks to predict a variety of outcomes in different scenarios.
- Create models of customer satisfaction that provide detailed insight into what causes a customer to take different actions.
- Collaborate with other data scientists and stakeholders on projects.
- Develop solutions in R or Python.
- Develop production-grade solutions.
- Work in Hadoop, Redshift, and Spark.
- Translate business and product questions into analytics projects.
- Communicate clearly over written and oral channels while translating complex methodologies and analytical results into high-level insights.

Qualifications
- 5-10 years of experience in a data science and/or machine learning role with deep expertise in Python.
- Strong experience with Azure, Azure ML, and Terraform.
- Experience building and managing robust CI/CD pipelines for machine learning workflows, including model training, evaluation, and deployment.
- Excellent verbal and written English communication skills.

Preferred Skills
- Experience with PyTorch.
- Experience developing and deploying image models.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with at least 7 years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, and EMR, as well as Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with at least 7 years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, and EMR, as well as Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with at least 7 years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, and EMR, as well as Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company: Indian / Global Digital Organization
Key Skills: Core Java, Advanced Java, Spring Boot

Roles and Responsibilities
- Own and drive backend projects end-to-end, from brainstorming to production systems.
- Design and build cloud-native microservices that are resilient, observable, and horizontally scalable.
- Collaborate in a high-velocity environment with short iteration cycles and frequent A/B testing.
- Work with petabyte-scale data, leveraging Redshift and other AWS-native analytics tools to inform product decisions.
- Integrate AI/ML capabilities into product workflows for smarter, personalized user experiences.
- Contribute to code reviews and design reviews, and foster a strong technical culture within the team.

Skills Required
- Core Java (must-have): strong expertise in object-oriented programming using Core Java, with experience building enterprise-grade backend systems.
- Advanced Java (nice-to-have): knowledge of advanced Java concepts like multithreading, concurrency, and JVM internals.
- Spring Boot: experience developing RESTful APIs and microservices using the Spring Boot framework.
- Cloud-native development: hands-on experience with cloud platforms (preferably AWS) and building scalable, distributed systems.
- Database & analytics tools: familiarity with Redshift, relational databases (e.g., MySQL/PostgreSQL), and cloud-native analytics tools.
- AI/ML integration: exposure to integrating AI/ML models or services into backend workflows is a plus.
- CI/CD and DevOps: understanding of continuous integration/deployment pipelines and tools like Jenkins, Docker, or Kubernetes.

Education: Bachelor's degree in Computer Science or a related field.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Open Locations: Indore, Noida, Gurgaon, Bangalore, Hyderabad, Pune

Job Description
- 3-8 years' experience in data engineering, ETL/ELT processes, data warehousing, and data lake implementation with AWS services.
- Hands-on experience designing and implementing solutions such as creating and deploying jobs, orchestrating jobs/pipelines, and configuring infrastructure.
- Expertise in designing and implementing PySpark and Spark SQL based solutions.
- Design and implement data warehouses using Amazon Redshift, ensuring optimal performance and cost efficiency.
- Good understanding of security, compliance, and governance standards.

Roles & Responsibilities
- Design and implement robust and scalable data pipelines using AWS/Azure services.
- Drive architectural decisions for data solutions on AWS, ensuring scalability, security, and cost-effectiveness.
- Hands-on experience developing and deploying ETL/ELT processes from various data sources using Glue/Azure Data Factory, Lambda/Azure Functions, Step Functions/Azure Logic Apps/MWAA, S3, and Lake Formation.
- Strong proficiency in PySpark, SQL, and Python.
- Proficiency in SQL for data querying and manipulation.
- Experience with data modelling, ETL processes, and data warehousing concepts.
- Create and maintain documentation for data pipelines and processes, following best practices.
- Knowledge of Spark optimization techniques, monitoring, and automation would be a plus.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Understanding of data governance, compliance, and security best practices.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills, with an understanding of stakeholder mapping.

Good to Have
- Understanding of Databricks.
- GenAI and experience working with LLMs.

Mandatory Skills: AWS or Azure cloud, Python programming, SQL, Spark SQL, Hive, Spark optimization techniques, and PySpark.

Share your resume at sonali.mangore@impetus.com with details (CTC, expected CTC, notice period).
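As an illustration of the PySpark and Spark SQL work described above, the sketch below refines a raw (bronze) dataset into a partitioned silver table, using a broadcast hint as a simple Spark optimization. All paths, view names, and columns are hypothetical.

```python
# Bronze-to-silver refinement with Spark SQL and a broadcast join hint.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.parquet("s3://example-lake/bronze/transactions/")
dim_store = spark.read.parquet("s3://example-lake/reference/stores/")

bronze.createOrReplaceTempView("bronze_txn")
dim_store.createOrReplaceTempView("dim_store")

# Spark SQL transform; the BROADCAST hint avoids shuffling the small dimension
silver = spark.sql("""
    SELECT /*+ BROADCAST(s) */
           t.txn_id,
           t.store_id,
           s.region,
           CAST(t.amount AS DOUBLE) AS amount,
           TO_DATE(t.txn_ts)        AS txn_date
    FROM bronze_txn t
    JOIN dim_store s ON t.store_id = s.store_id
    WHERE t.amount IS NOT NULL
""")

(
    silver.repartition("txn_date")            # one writer per partition value
    .write.mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://example-lake/silver/transactions/")
)
```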

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Job Title: AWS Data Engineer – Python | PySpark
Location: Navi Mumbai
Experience: 4 – 10 years
Employment Type: Full-Time

Job Overview
We are seeking a skilled and motivated AWS Data Engineer with hands-on experience in AWS cloud services, Python, PySpark, SQL, and Unix. The ideal candidate will have strong experience building scalable data pipelines, managing ETL workflows, and deploying data solutions in a production environment using modern cloud-native tools and DevOps practices.

Key Responsibilities
- Design, build, and maintain scalable data pipelines using PySpark, Python, and SQL.
- Develop and manage ETL workflows using AWS Glue, Lambda, and EMR.
- Work with AWS services such as SNS, EventBridge, Redshift, and S3 to support data ingestion, transformation, and storage.
- Collaborate with data scientists and analysts to provide clean, reliable data in Jupyter and BI environments.
- Implement and maintain DevOps practices for data pipeline deployment and monitoring.
- Troubleshoot data and system issues, ensuring high availability and performance.
- Work in Unix/Linux environments to support data processing and automation tasks.

Required Skills
- Strong programming experience in Python and PySpark.
- Proficiency in writing complex SQL queries.
- Experience working in Unix/Linux environments.
- In-depth knowledge of AWS data services: EMR, Glue, Lambda, SNS, EventBridge, Redshift, etc.
- Familiarity with DevOps tools and CI/CD practices for data pipeline deployment.
- Experience working with Jupyter Notebooks or similar data science tools.

Skills: AWS, ETL, Redshift, SNS, EventBridge, DevOps, EMR, SQL, AWS Glue, Python, data engineering, data pipelines, PySpark, Jupyter Notebooks, Unix/Linux, AWS Lambda
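A small, hedged example of how the services in this posting (EventBridge, Lambda, Glue, SNS) are often wired together: an EventBridge-triggered Lambda that starts a Glue job run and publishes a notification. The job name, topic ARN, and environment variables are placeholders.

```python
# EventBridge-triggered Lambda: start a Glue ETL job and announce it on SNS.
import json
import os

import boto3

glue = boto3.client("glue")
sns = boto3.client("sns")

GLUE_JOB_NAME = os.environ.get("GLUE_JOB_NAME", "orders-etl")  # placeholder
SNS_TOPIC_ARN = os.environ.get(
    "SNS_TOPIC_ARN", "arn:aws:sns:ap-south-1:123456789012:etl-events"  # placeholder
)


def handler(event, context):
    """Entry point invoked by an EventBridge schedule or S3-event rule."""
    run = glue.start_job_run(
        JobName=GLUE_JOB_NAME,
        Arguments={"--source_event": json.dumps(event.get("detail", {}))},
    )

    sns.publish(
        TopicArn=SNS_TOPIC_ARN,
        Subject="Glue job started",
        Message=f"Started {GLUE_JOB_NAME}, run id {run['JobRunId']}",
    )

    return {"jobRunId": run["JobRunId"]}
```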

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a highly skilled LLM Engineer with a minimum of 3 to 6 years of experience in software development and 1-2 years in LLM solution development. The ideal candidate will have strong experience working with Python, LLM solution patterns and tools (RAG, vector DBs, agentic workflows, LoRA, etc.), cloud platforms (AWS, Azure, GCP), and DevOps tools. They will be responsible for designing and developing scalable software solutions, leading architecture design, and ensuring the performance and reliability of our systems.

Responsibilities
- Take ownership of architecture design and development of scalable and distributed software systems.
- Translate business requirements into technical requirements.
- Own technical execution, ensuring code quality, adherence to deadlines, and efficient resource allocation.
- Apply data-driven decision making with a focus on achieving product goals.
- Design, develop, and deploy LLM-based pipelines involving patterns such as RAG, agentic workflows, and PEFT (e.g., LoRA, QLoRA).
- Own the complete software development lifecycle, including requirements analysis, design, coding, testing, and deployment.
- Utilize AWS/Azure services such as IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, and ACR.
- Implement DevOps practices using tools like Docker and Kubernetes to ensure continuous integration and delivery; develop DevOps scripts for automation and monitoring.
- Collaborate with cross-functional teams, conduct code reviews, and provide guidance on software design and best practices.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience); at least 5 years of experience in software development, with relevant work experience in LLM app development.
- Strong coding skills with proficiency in Python and JavaScript.
- Experience with API frameworks, both stateless and stateful, such as FastAPI and Django.
- Well versed in implementing WebSockets, gRPC, and access management using JWT (Azure AD, IDM preferred).
- Proficient in cloud platforms, specifically AWS, Azure, or GCP.
- Knowledge of and hands-on experience with front-end development (React JS, Next JS, Tailwind CSS) preferred.
- Strong experience with LLM patterns such as RAG, vector DBs, hybrid search, agent development, agentic workflows, and prompt engineering.
- Strong experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy).
- Hands-on experience with DevOps tools including Docker, Kubernetes, and AWS services (Redshift, RDS, S3).
- Experience with production deployments involving thousands of users.
- Strong understanding of scalable application design principles, security best practices, and compliance with privacy regulations.
- Good knowledge of software engineering practices such as version control (Git), DevOps (Azure DevOps preferred), and Agile or Scrum.
- Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Experience with the SDLC and development best practices.
- Experience with Agile methodology for continuous product development and delivery.
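To ground the RAG pattern mentioned above, here is a deliberately minimal sketch of retrieval-augmented generation using plain NumPy cosine similarity. The embed() and generate() functions are hypothetical stand-ins for whatever embedding model and LLM API (OpenAI, Anthropic, Bedrock) a real system would call, and a production pipeline would use a vector database rather than an in-memory index.

```python
# Minimal RAG sketch: embed documents, retrieve the closest chunks for a
# question, and ground the LLM prompt on them. embed()/generate() are
# placeholders, not real library calls.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder: call your embedding model here and return a vector."""
    raise NotImplementedError


def generate(prompt: str) -> str:
    """Placeholder: call your LLM of choice here."""
    raise NotImplementedError


def build_index(documents: list[str]) -> np.ndarray:
    """Embed every document once; a real system would use a vector DB instead."""
    return np.stack([embed(doc) for doc in documents])


def answer(question: str, documents: list[str], vectors: np.ndarray, k: int = 3) -> str:
    """Retrieve the k most similar chunks and build a grounded prompt."""
    q = embed(question)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    top_chunks = [documents[i] for i in np.argsort(sims)[::-1][:k]]

    prompt = (
        "Answer using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(top_chunks) + f"\n\nQuestion: {question}"
    )
    return generate(prompt)
```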

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Responsibilities
- Design, develop, extend, and maintain end-to-end data workflows and pipelines in Dataiku DSS.
- Collaborate with data scientists and analysts to operationalize machine learning models.
- Leverage Generative AI models and tools within Dataiku to build advanced AI-powered applications and analytics solutions.
- Integrate Dataiku with various data sources (databases, cloud storage, APIs).
- Develop and optimize SQL queries and Python/R scripts for data extraction and transformation across relational and NoSQL databases.
- Work extensively with cloud data warehouses such as Amazon Redshift and/or Snowflake for data ingestion, transformation, and analytics.
- Implement automation and scheduling of data workflows.
- Monitor and troubleshoot data pipelines to ensure data quality and reliability.
- Document technical solutions and best practices for data processing and analytics.

Required Skills and Qualifications
- 4+ years of proven experience working with Dataiku Data Science Studio (DSS) in a professional environment.
- Strong knowledge of data engineering concepts and ETL/ELT processes.
- Proficiency in Python and/or R for data manipulation and automation.
- Solid SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL, Oracle).
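For a sense of what a Dataiku DSS pipeline step looks like in code, below is a hedged sketch of a Python recipe that reads one dataset, applies a pandas transformation, and writes another. The dataset names are placeholders, and the dataiku package is only available inside DSS.

```python
# Dataiku DSS Python recipe sketch: read an input dataset, prepare it with
# pandas, and write the output dataset declared in the recipe's flow.
import dataiku
import pandas as pd

orders_in = dataiku.Dataset("orders_raw")        # placeholder dataset name
orders_out = dataiku.Dataset("orders_prepared")  # placeholder dataset name

df = orders_in.get_dataframe()

# Simple preparation step: drop duplicates and derive an order-month column
df = df.drop_duplicates(subset=["order_id"])
df["order_month"] = pd.to_datetime(df["order_ts"]).dt.to_period("M").astype(str)

orders_out.write_with_schema(df)
```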

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- Experience in big data technologies such as Hadoop, Apache Spark, and Hive.
- Practical experience in Core Java (1.8 preferred), Python, or Scala.
- Experience with AWS cloud services including S3, Redshift, EMR, etc.
- Strong expertise in RDBMS and SQL.
- Good experience in Linux and shell scripting.
- Experience building data pipelines using Apache Airflow.

Preferred Technical and Professional Experience
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

We Are Hiring | Data Engineer (6+ Years)
Happy to connect with you all from #LINCHPINZ! We have an exciting opportunity for a Data Engineer to join our growing team. If you're passionate about data and have experience working with modern cloud and big data technologies, we'd love to hear from you!

Role: Data Engineer
Experience: 6+ Years
Location: Chennai / Remote

Job Description
We are seeking a highly skilled Senior Data Engineer with over 6 years of experience in designing, building, and maintaining large-scale data pipelines and infrastructure. You should have hands-on experience in:
- Cloud platforms – #Azure or #AWS
- Big data technologies – #Hadoop, #Hive
- Programming and querying – #SQL, #Python, #PySpark

Job Type: Full-time
Pay: Up to ₹3,000,000.00 per year

Application Question(s): We have openings for the Pune, Bangalore, Chennai, and Hyderabad locations.

Experience Required:
- Data Engineer: 6 years (Required)
- SQL: 4 years (Required)
- AWS Glue: 4 years (Required)
- PySpark: 4 years (Required)
- Python: 4 years (Required)
- Redshift: 4 years (Required)

Location: Chennai, Tamil Nadu (Preferred)
Work Location: Remote

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

Amazon, Earth's most customer-centric company, offers low prices, vast selection, and convenience through its world-class e-commerce platform. The Competitive Pricing team ensures customer trust through optimal pricing across all Amazon marketplaces. Within this organization, our Data Engineering team, part of the Pricing Big Data group, builds and maintains the global pricing data platform. We enable price competitiveness by processing data from multiple sources, creating actionable pricing dashboards, providing deep-dive analytics capabilities, and driving operational efficiency.

As a Data Engineer, you will collaborate with technical and business teams to develop real-time data processing solutions. You will lead the architecture, design, and development of the pricing data platform using AWS technologies and modern software development principles. Your responsibilities will include architecting and implementing automated Business Intelligence solutions, designing scalable big data and analytical capabilities, and creating actionable metrics and reports for engineers, analysts, and stakeholders.

In this role, you will partner with business leaders to drive strategy and prioritize projects. You'll develop and review business cases, and lead technical implementation from design to release. Additionally, you will provide technical leadership and mentoring to the data engineering team. This position offers an opportunity to make a significant impact on Amazon's pricing strategies and contribute to the company's continued growth and evolution in the e-commerce space.

Key job responsibilities
- Design, implement, and maintain data infrastructure for enterprise-wide analytics
- Extract, transform, and load data from multiple sources using SQL and AWS big data technologies
- Build comprehensive domain knowledge of Amazon's business operations and metrics
- Write clear, concise documentation and communicate effectively with stakeholders across teams
- Deliver results independently while meeting deadlines
- Collaborate with engineering teams to solve complex data challenges
- Automate reporting processes and develop self-service analytics tools for customers
- Research and implement new AWS technologies to enhance system capabilities

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Bachelor's degree

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
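A common building block on a Redshift-centric platform like the one described above is loading S3 data through the Redshift Data API; the sketch below shows the general shape. Cluster, database, IAM role, and bucket identifiers are placeholders, not details from the posting.

```python
# Issue a Redshift COPY from S3 via the Redshift Data API (boto3).
import boto3

redshift_data = boto3.client("redshift-data")

copy_sql = """
    COPY pricing.offers
    FROM 's3://example-pricing-bucket/offers/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="pricing-cluster",   # placeholder cluster
    Database="analytics",                  # placeholder database
    DbUser="etl_user",                     # placeholder user
    Sql=copy_sql,
)

# The Data API is asynchronous; poll describe_statement until it finishes.
status = redshift_data.describe_statement(Id=response["Id"])
print(status["Status"])
```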

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

DESCRIPTION
Interested in the gig economy? Come be a part of it. The Amazon Flex Analytics Team is building a data platform that powers Amazon Flex worldwide. We're working hard, having fun, and making history! We are looking for candidates who want to help shape the future of Flex. Specifically, we are looking for a Data Engineer who is passionate about data architecture and wants to help us use data to understand Flex driver behavior and satisfaction.

In this role, you will develop and support the data technologies that give our teams flexible and structured access to their data, including implementation of a self-service analytics platform, defining metrics and KPIs, and automating reporting and data visualization. The successful candidate considers themselves an enterprise data architect. You should excel in the design, creation, and management of analytical data infrastructure. You will be responsible for designing and implementing scalable processes to publish data and building solutions to reconcile data for integrity and accuracy of data sets used for analysis and reporting. You should have a broad understanding of RDBMS, ETL, data integration, data warehousing, data governance, and data lakes. Experience with Python, R, or Spark is highly preferred and will put you at the top of the list.

Key job responsibilities
- Develop and improve the current data architecture using AWS Redshift, AWS S3, Spark, and EMR.
- Improve upon the data ingestion models, ETL jobs, and alarming to maintain data integrity and data availability.
- Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with increasing data volumes.
- Partner with BIEs and Analysts across teams such as product management, operations, finance, marketing, and engineering to build and verify hypotheses to improve business performance.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing, and building ETL pipelines

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

As a Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate adoption of the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like Proof of Concepts, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
- Drive technical conversations with decision makers using demos and PoCs to influence solution design and enable production deployments.
- Lead hands-on engagements such as hackathons and architecture workshops to accelerate adoption of Microsoft's cloud platforms.
- Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions.
- Resolve technical blockers and objections, collaborating with engineering to share insights and improve products.
- Maintain deep expertise in the analytics portfolio: Microsoft Fabric (OneLake, DW, real-time intelligence, BI, Copilot), Azure Databricks, and Purview Data Governance, plus Azure Databases: SQL DB, Cosmos DB, PostgreSQL.
- Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions.
- Represent Microsoft through thought leadership in cloud database and analytics communities and customer forums.

Preferred Qualifications
- 6+ years of technical pre-sales, technical consulting, technology delivery, or related experience, or equivalent experience; plus 4+ years of experience with cloud and hybrid or on-premises infrastructure, architecture designs, migrations, industry standards, and/or technology management.
- Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2.
- Or: 5+ years of technical pre-sales or technical consulting experience; or a Bachelor's Degree in Computer Science, Information Technology, or a related field and 4+ years of technical pre-sales or technical consulting experience; or a Master's Degree in Computer Science, Information Technology, or a related field and 3+ years of technical pre-sales or technical consulting experience; or equivalent experience.
- Expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL), spanning migration and modernization as well as building new AI apps.
- Expert on Azure Analytics (Fabric, Azure Databricks, Purview) and other cloud products (BigQuery, Redshift, Snowflake) across data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated data security and governance.
- Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description

Required Skills
- Python programming: strong ability to write clean and efficient code.
- Spark SQL: good understanding of Spark SQL for distributed data processing.
- Data processing: experience with large datasets and structured data manipulation.
- SQL fundamentals: ability to write queries and optimize database performance.
- Problem-solving: analytical mindset to debug and optimize workflows.

Preferred Skills
- AWS cloud services: familiarity with S3, Redshift, Lambda, and EMR is an advantage.
- ETL development: understanding of ETL processes and data engineering principles.
- Version control: experience using Git for collaborative development.
- Big data tools: exposure to Hive, PySpark, or similar technologies.

Roles & Responsibilities
- Develop and optimize Python scripts for data processing and automation.
- Write efficient Spark SQL queries for handling large-scale structured data.
- Assist in ETL pipeline development and maintenance.
- Support data validation and integrity checks across systems.
- Collaborate with teams to implement cloud-based solutions (AWS preferred).
- Optimize performance of data queries and workflows.
- Troubleshoot and debug issues in existing applications.
- Document processes and ensure best practices in coding and data handling.
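As a concrete example of the Spark SQL data validation and integrity checks listed above, the short sketch below computes row counts, null counts, and duplicate counts on a staged table and fails fast if the rules are violated. The paths and column names are hypothetical.

```python
# Spark SQL data validation sketch: completeness and uniqueness checks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-validation").getOrCreate()

spark.read.parquet("s3://example-bucket/staged/customers/") \
    .createOrReplaceTempView("staged_customers")

checks = spark.sql("""
    SELECT COUNT(*)                                              AS row_count,
           SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END)  AS null_ids,
           COUNT(*) - COUNT(DISTINCT customer_id)                AS duplicate_ids
    FROM staged_customers
""").first()

# Fail the pipeline step if integrity rules are violated
assert checks["null_ids"] == 0, "customer_id must not be null"
assert checks["duplicate_ids"] == 0, "customer_id must be unique"
print(f"Validation passed for {checks['row_count']} rows")
```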

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Amazon Consumer Gift Cards is a fast-growing, multi-billion-dollar, worldwide business with a mission to provide the world's most desired and convenient gift to customers. We are looking for a Business Intelligence Engineer with broad technical skills to build analytic and reporting capabilities, deliver on strategic analytical/reporting projects, define and produce end-to-end metrics that inform product, business, and marketing decisions, and identify new growth opportunities through data-driven insights.

The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex business contexts, and, above all else, is passionate about data and analytics. The candidate is an expert with business intelligence tools and passionately partners with the business to identify strategic opportunities where data-backed insights drive value creation. An effective communicator, the candidate crisply translates analysis results into executive-facing business terms. The candidate works aptly with internal and external teams to push projects across the finish line. The candidate is a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoys working in a fast-paced, global team.

Key job responsibilities
Core responsibilities include, but are not restricted to:
- Interfacing with business customers, gathering requirements, and delivering complete BI solutions to drive insights and inform product, operations, and marketing decisions.
- Interfacing with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL (Redshift, Oracle), with the ability to use a programming and/or scripting language to process data for modeling.
- Evolving organization-wide self-service platforms.
- Building metrics to analyze key inputs to forecasting systems.
- Leading complex analytical deep dives (segmentation, A/B testing).
- Recognizing and adopting best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.

Basic Qualifications
- Bachelor's degree in computer science engineering, economics, statistics, mathematics, econometrics, or a similar quantitative field.
- Demonstrated ability to interact with business customers, gather requirements, and deliver complete, scalable, and sustainable BI solutions.
- 2+ years of work experience in the analytics field and working with relational databases.
- Self-driven, with the demonstrated ability to deliver fast-paced projects using extremely large data sets.
- Fluency in SQL and a deep understanding of ETL are a must.

Preferred Qualifications
- Effective spoken and written communication with senior audiences, including strong data presentation and visualization skills.
- Experience and ability to effectively gather information from multiple data sources and deliver on ambiguous projects with incomplete or dirty data.
- Knowledge of and direct experience using business intelligence reporting tools such as Tableau/QuickSight.
- Experience working with Redshift, Cradle, or other AWS tools is a plus.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka Job ID: A3006531

Posted 2 weeks ago

Apply

5.0 years

4 - 6 Lacs

Bengaluru

On-site

Senior AWS Developer (P2 C3 STS)

Primary Skills: Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources.
Secondary Skills: SQL.

Job Description
Seeking a developer with strong experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources, who can optimize data models for performance and efficiency and write SQL queries to support data analysis and reporting. The role will design, implement, and maintain the data architecture for all AWS data services, work with stakeholders to identify business needs and requirements for data-related projects, and design and implement ETL processes to load data into the data warehouse.

Responsibility
We are seeking a highly skilled Senior AWS Developer to join our team as a Senior Consultant. With a primary focus on Pega and SQL, the ideal candidate will also have experience with Agile methodologies. As a Senior AWS Developer, you will be responsible for optimizing data models for performance and efficiency, writing SQL queries to support data analysis and reporting, and designing and implementing ETL processes to load data into the data warehouse. You will also work with stakeholders to identify business needs and requirements for data-related projects, and design and maintain the data architecture for all AWS data services. The ideal candidate will have at least 5 years of work experience and be comfortable working in a hybrid setting.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally who care about your growth, and who seek to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
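For context on the Glue-based ETL work this posting describes, below is a minimal sketch of an AWS Glue PySpark job that reads a cataloged source table, applies a column mapping, and writes partitioned Parquet to S3. The catalog database, table, bucket, and column names are assumed placeholders; a production job would add job bookmarks, error handling, and a Redshift load step.

```python
# Minimal AWS Glue job sketch: catalog source -> column mapping -> Parquet on S3.
# Database, table, bucket, and column names are hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "target_bucket"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read the raw orders table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",      # placeholder catalog database
    table_name="orders",    # placeholder table
)

# Rename and cast columns into the warehouse-facing model.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id",   "string", "order_id",   "string"),
        ("order_ts",   "string", "order_date", "timestamp"),
        ("amount_usd", "double", "amount_usd", "double"),
    ],
)

# Write partitioned Parquet; a downstream step could COPY this into Redshift.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={
        "path": f"s3://{args['target_bucket']}/curated/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)
```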

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Overview
As an Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
- Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling: documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates the key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Demonstrate expertise with data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; data streaming (consumption/production) and data in transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the Data Governance team to standardize the classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
- 5+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture.
- 2+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 2+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience integrating multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and with deployment and CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
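As a small illustration of the source-to-target mapping deliverable mentioned in the responsibilities, here is a sketch of expressing such a mapping as plain data and applying it with pandas. The source system, table, and column names are invented for the example; real mappings of this kind would normally live in a modeling tool or metadata repository rather than in code.

```python
# Sketch: a source-to-target mapping kept as plain data, then applied with pandas.
# Source system, table, and column names below are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Optional
import pandas as pd

@dataclass
class Mapping:
    source_column: str
    target_column: str
    transform: Optional[Callable[[pd.Series], pd.Series]] = None
    note: str = ""

# One entry per target attribute, as an ETL/BI developer would consume it.
ORDER_MAPPINGS = [
    Mapping("ord_id",    "order_id",    note="natural key from source ERP"),
    Mapping("ord_dt",    "order_date",  lambda s: pd.to_datetime(s), "ISO date string"),
    Mapping("amt_local", "amount_usd",  lambda s: s.astype(float),   "already in USD"),
    Mapping("cust_ref",  "customer_id", note="joins to dim_customer"),
]

def apply_mappings(source: pd.DataFrame, mappings: list[Mapping]) -> pd.DataFrame:
    """Project the source frame onto the target model described by the mappings."""
    target = pd.DataFrame(index=source.index)
    for m in mappings:
        col = source[m.source_column]
        target[m.target_column] = m.transform(col) if m.transform else col
    return target

if __name__ == "__main__":
    raw = pd.DataFrame({"ord_id": ["A1"], "ord_dt": ["2024-01-05"],
                        "amt_local": ["19.99"], "cust_ref": ["C042"]})
    print(apply_mappings(raw, ORDER_MAPPINGS))
```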

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

CSQ326R201

Mission
At Databricks, we are on a mission to empower our customers to solve the world's toughest data problems by utilizing the Data Intelligence Platform. As a Scale Solution Engineer, you will play a critical role in advising customers in their onboarding journey. You will work directly with customers to help them onboard and deploy Databricks in their production environment.

The Impact You Will Have
You will ensure new customers have an excellent experience by providing them with technical assistance early in their journey. You will become an expert on the Databricks Platform and guide customers in making the best technical decisions to achieve their goals. You will work with multiple tactical customers, tracking and reporting on their progress.

What We Look For
- 2+ years of industry experience.
- Early-career technical professional, ideally in data-driven or cloud-based roles.
- Knowledge of at least one public cloud platform (AWS, Azure, or GCP) is required.
- Knowledge of a programming language: Python, Scala, or SQL.
- Knowledge of the end-to-end data analytics workflow.
- Hands-on professional or academic experience in one or more of the following: Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow); Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake).
- Excellent time management and presentation skills.
- Bonus: knowledge of Data Science and Machine Learning (e.g., building and deploying ML models).

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
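To ground the end-to-end analytics workflow mentioned in the requirements, here is a minimal PySpark sketch of the kind of first workload a newly onboarded Databricks workspace might run: read raw CSV files, apply a light transformation, and save the result as a Delta table. The storage path and table name are placeholders, not details from the posting.

```python
# Minimal sketch of a first Databricks workload: CSV in, cleaned Delta table out.
# The storage path and table name are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# Read raw events from cloud storage, inferring a simple schema.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-bucket/raw/events/"))   # placeholder path

# Light cleanup: drop duplicates and add an ingestion date for partitioning.
cleaned = (raw.dropDuplicates(["event_id"])
              .withColumn("ingest_date", F.current_date()))

# Persist as a managed Delta table that SQL/BI users can query immediately.
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("ingest_date")
        .saveAsTable("onboarding_demo.events"))   # placeholder schema.table
```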

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies