
745 Amazon Redshift Jobs - Page 19

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

15.0 - 20.0 years

17 - 22 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their data engineering skills and knowledge.
- Continuously evaluate and improve data processes for efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Good to have: Experience with Apache Kafka.
- Strong understanding of data warehousing concepts and architecture.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with SQL and NoSQL databases for data storage and retrieval.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Chennai.
- 15 years of full-time education is required.
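For context, a minimal sketch of the kind of PySpark ETL pipeline this role describes; the paths, table, and column names are hypothetical, not details from the posting.

```python
# Minimal, illustrative PySpark ETL sketch: extract CSV, transform, load Parquet.
# Paths and column names (orders.csv, order_id, amount, order_date) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV with a header row and inferred schema.
raw = spark.read.csv("s3://example-bucket/raw/orders.csv", header=True, inferSchema=True)

# Transform: basic data-quality filtering and derived columns.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount_rounded", F.round(F.col("amount"), 2))
)

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
spark.stop()
```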

Posted 1 month ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their data engineering skills and knowledge.
- Continuously evaluate and improve data processes for efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Strong understanding of data modeling and database design principles.
- Experience with data warehousing solutions and ETL tools.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data governance and data quality best practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 1 month ago

Apply


15.0 - 20.0 years

17 - 22 Lacs

Pune

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their data engineering skills and knowledge.
- Continuously evaluate and improve data processes for efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in the Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
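To illustrate the pipeline-orchestration skill named above, here is a hedged Apache Airflow DAG sketch that submits a notebook run to Databricks. The notebook path, cluster spec, and "databricks_default" connection ID are assumptions for the example, and the Databricks provider package must be installed.

```python
# Illustrative Airflow 2.x DAG submitting a Databricks notebook run.
# Requires apache-airflow-providers-databricks; the notebook path,
# cluster spec, and connection ID below are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_notebook = DatabricksSubmitRunOperator(
        task_id="transform_orders",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/data/transform_orders"},
    )
```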

Posted 1 month ago

Apply


15.0 - 20.0 years

17 - 22 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their data engineering skills and knowledge.
- Continuously evaluate and improve data processes for efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in the Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply


4.0 - 9.0 years

6 - 11 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions that meet ever-increasing data volumes.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.
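To make the streaming-pipeline requirement concrete, below is a minimal PySpark Structured Streaming sketch that reads from Kafka and lands micro-batches in a data lake; the broker address, topic, and paths are hypothetical, and the job needs the Spark-Kafka connector package at runtime.

```python
# Minimal, illustrative Spark Structured Streaming job: Kafka -> Parquet.
# Requires the spark-sql-kafka connector package; broker address, topic,
# and output/checkpoint paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a stream of events from a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; decode the value payload to a string.
decoded = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Write micro-batches to Parquet with checkpointing for fault tolerance.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/orders/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```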

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

Work from Office

As a Senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions and acting as a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology (or equivalent) and its associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries.

Your primary responsibilities include:
- Strategic SAP solution focus: working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs.
- Comprehensive solution delivery: involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5-7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering skills.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Exposure to streaming solutions and message brokers such as Kafka.
- Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.

Posted 1 month ago

Apply


10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are
At Kyndryl, we design, build, manage, and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers, and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities:
- Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena.
- Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming.
- Optimize data pipelines for performance, scalability, and cost-efficiency.
- Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation).
- Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform.
- Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation.
- Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions.
- Monitor, troubleshoot, and resolve issues in production pipelines.
- Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: you are keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Skills and Experience:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- Over 8 years of experience in data engineering.
- More than 3 years of experience with the AWS data ecosystem.
- Strong experience with PySpark, SQL, and Python.
- Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions.
- Familiarity with data modelling concepts, dimensional models, and data lake architectures.
- Experience with CI/CD, GitHub Actions, and CloudFormation/Terraform.
- Understanding of data governance, privacy, and security best practices.
- Strong problem-solving and communication skills.

Preferred Skills and Experience:
- Experience working as a Data Engineer and/or in cloud modernization.
- Experience with AWS Lake Formation and Data Catalog for metadata management.
- Knowledge of Databricks, Snowflake, or BigQuery for data analytics.
- AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus.
- Strong problem-solving and analytical thinking.
- Excellent communication and collaboration abilities.
- Ability to work independently and in agile teams.
- A proactive approach to identifying and addressing challenges in data workflows.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate and build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
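As a small illustration of working with the AWS data ecosystem named above, here is a hedged boto3 sketch that starts an AWS Glue job run and polls its status; the job name, argument, and region are placeholders, not details from the posting.

```python
# Illustrative boto3 sketch: start an AWS Glue job run and poll until it finishes.
# The job name, job argument, and region are placeholders.
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a run of an existing Glue job, passing a job argument.
run = glue.start_job_run(
    JobName="curate-orders",
    Arguments={"--run_date": "2024-01-01"},
)
run_id = run["JobRunId"]

# Poll the run state until Glue reports a terminal status.
while True:
    status = glue.get_job_run(JobName="curate-orders", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job run {run_id} finished with state {state}")
```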

Posted 1 month ago

Apply

6.0 - 10.0 years

14 - 24 Lacs

Hyderabad

Work from Office

Role & responsibilities
Job Title: Data Engineer
Years of experience: 6 to 10 years (minimum 5 years of relevant experience)
Work Mode: Work from Office, Hyderabad
Notice Period: Immediate to 30 days only
Key Skills: Python, SQL, AWS, Spark, Databricks (mandatory); Airflow (good to have)

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Pune

Work from Office

What You'll Do
Avalara, Inc. is the leading provider of cloud-based software that delivers a broad array of compliance solutions related to sales tax and other transactional taxes. We are building cloud-based tax compliance solutions to handle every transaction in the world. Every transaction you make, physical or digital, has a unique and nuanced tax calculation that accompanies it. We do those, and we want to do all of them. Avalara is building the global cloud compliance platform, and the Build and Deployment Tooling Team helps enable the development of this platform. Our engineering teams are diverse in their engineering practices, culture, and background. We create the systems that allow them to produce quality products at an increasing pace. As a member of the team, you will take part in architecting the tooling that lowers the barriers for development. You will report to the Manager, Site Reliability Engineering.

This might be a good fit for you if:
- Helping people do their best resonates with you.
- You love platform engineering.
- You want to build cool things with cool people.
- You love building high-impact tools and software that everyone depends on.
- You love automating everything!

What Your Responsibilities Will Be
Some areas of work are:
- Create tools that smooth the journey from idea to running in production.
- Learn and promote best practices related to the build, test, and deployment of software.

What You'll Need to Be Successful
Qualifications:
- Software engineering: understand software engineering fundamentals and have experience developing software in a team of engineers, including experience practicing testing.
- Build automation: experience getting artifacts in many languages packaged and tested so that they can be trusted to go into production. Automatically.
- Release automation: experience getting artifacts running in production. Automatically.
- Observability: experience developing service-level indicators and goals, instrumenting software, and building meaningful alerts.
- Troubleshooting: experience tracking down technical causes of failures in distributed software.
- Containers/container orchestration systems: an understanding of how to manage container-based systems, especially on Kubernetes.
- Artificial intelligence: a grounding in infrastructure for, and the use of, agentic systems.
- Infrastructure as code: experience deploying and maintaining infrastructure-as-code tools such as Terraform and Pulumi.
- Technical writing: we will need to build documentation and diagrams for other engineering teams; write technical documents that people love and adore.
- Customer satisfaction: experience ensuring that code meets all functionality and acceptance criteria for customer satisfaction (our customers are other engineering teams and Avalara customers).
- Go: our tooling is developed in Go.
- Distributed computing: experience architecting distributed services across regions and clouds.
- GitLab: experience working with, managing, and deploying.
- Artifactory: experience working with, managing, and deploying.
- Open source: build side projects or contribute to other open-source projects.

Experience:
- Minimum 8 years of experience in a SaaS environment.
- Bachelor's degree in computer science or equivalent.
- Participation in an on-call rotation.
- Experience with a data warehouse like Snowflake, Redshift, or Spark.

Posted 1 month ago

Apply

3.0 - 5.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Overview
We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom's branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world's creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in the areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, and Consulting & Advisory Services. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together.

Responsibilities
- Gather requirements and evaluate clients' business situations in order to implement appropriate analytic solutions.
- Design, generate, and manage reporting frameworks that provide insight into the performance of clients' marketing activities across multiple channels.
- Be the single point of contact on anything data- and analytics-related for the project.
- QA process: maintain, create, and review QA plans for deliverables to align with requirements, identify any discrepancies, and troubleshoot issues.
- Prioritize tasks and proactively manage workload to ensure timely delivery with high accuracy.
- Contribute actively to project planning and scheduling.
- Create and maintain project-specific documents such as process, quality, and learning documents.
- Drive conversations with the team, clients, and business stakeholders.

Qualifications
- 6-9 years' experience in data management and analysis in media or relevant domains, with strong problem-solving ability.
- Mandatory: strong SQL skills, good exposure to ETL capabilities such as Alteryx, and visualization and storytelling skills in Tableau.
- Prior knowledge of the media/advertising domain is beneficial.
- Good to have: Datorama API capability (not mandatory) and good media domain knowledge.
- Prior experience with AWS (S3 and Redshift) and GBQ is good to have; exposure to other ETL tools is good to have.
- Strong knowledge of media metrics, custom calculations, and metric correlation.
- Ability to identify and help determine key performance indicators for clients.
- Strong written and verbal communication skills.
- Has led delivery teams and projects to successful implementations.
- Familiarity with working with large data sets and creating cohesive stories.
- Able to work with and lead teams successfully, handling multiple projects and meeting timelines.
- Maintains positive client and vendor relationships.
- Presentation skills using MS PowerPoint or any presentation platform.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Gurugram

Work from Office

Job Responsibilities

Technology Leadership:
- Independently design, implement, and deliver complex data warehousing/data lake, cloud data management, and data integration project assignments.

Technical Design and Development (expertise in any of the following skills):
- Any of the ETL tools (Informatica, IICS, Matillion, DataStage); hosting technologies such as the AWS stack (Redshift, EC2) are mandatory.
- Any of the BI tools among Tableau, Qlik, Power BI, and MSTR.
- Informatica MDM and Customer Data Management.
- Expert knowledge of SQL, with the ability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must.
- Experience across Python, PySpark, and Unix/Linux shell scripting.

Project Management (must have):
- Able to create simple to complex project plans in Microsoft Project and think ahead about potential risks and mitigation plans for the project.

Task Management:
- Able to onboard the team onto the project plan and delegate tasks to accomplish milestones as planned.
- Comfortable discussing and prioritizing work items with team members in an onshore-offshore model.

Client Relationship:
- Manage client communication and client expectations independently or with the support of the reporting manager; deliver results back to the client as per plan.

Education: Bachelor equivalent; Other: PG Diploma in Management.
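To illustrate the SQL performance-tuning expectation above, here is a hedged Python sketch that inspects a Redshift query plan with EXPLAIN via psycopg2; the connection details, table, and query are placeholders invented for the example.

```python
# Illustrative sketch: inspect a Redshift query plan with EXPLAIN via psycopg2.
# Connection details and the query itself are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

query = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM sales
    GROUP BY customer_id
"""

with conn.cursor() as cur:
    # EXPLAIN returns the plan without executing the query; on Redshift,
    # watch for costly steps such as broadcast or redistribution joins.
    cur.execute("EXPLAIN " + query)
    for (plan_line,) in cur.fetchall():
        print(plan_line)

conn.close()
```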

Posted 1 month ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

Bangalore Rural

Work from Office

Data Governance Specialist
Req number: R5477
Employment type: Full time
Worksite flexibility: Remote

Who we are
CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary
We are looking for a motivated Data Governance Specialist ready to take us to the next level! If you are strong in creating and maintaining integrations between data systems and data governance tools and are looking for your next career move, apply now.

Job Description
We are looking for a Data Governance Specialist to develop and maintain Collibra workflows to support data governance initiatives. This position will be full time and remote.

What You'll Do
- Develop and maintain Collibra workflows to support data governance initiatives.
- Create and maintain integrations between data systems and data governance tools.
- Write and maintain data quality rules to measure data quality.
- Work with vendors to troubleshoot and resolve technical issues related to workflows and integrations.
- Work with other teams to ensure adherence to DG policies and standards.
- Assist in implementing data governance initiatives around data quality, master data, and metadata management.

What You'll Need
Required:
- Experience: 5-7 years.
- Strong programming skills.
- Knowledge of system integration and the use of middleware solutions.
- Proficiency in SQL and relational databases.
- Understanding of data governance, including data quality, master data, and metadata management.
- Willingness to learn new tools and skills.

Preferred:
- Proficient with Java or Groovy.
- Proficient with MuleSoft or other middleware.
- Proficient with Collibra DIP, Collibra Data Quality, and DQLabs.
- Experience with AWS Redshift and Databricks.

Physical Demands
- Ability to safely and successfully perform the essential job functions.
- Sedentary work that involves sitting or remaining stationary most of the time, with an occasional need to move around the office to attend meetings, etc.
- Ability to conduct repetitive tasks on a computer, utilizing a mouse, keyboard, and monitor.

Reasonable accommodation statement
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.
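As a generic illustration of the "data quality rules" responsibility (deliberately not Collibra-specific), here is a small self-contained Python sketch that evaluates SQL-based quality rules against a table; the table and rules are invented for the example.

```python
# Generic, self-contained sketch of SQL-based data-quality rules.
# The customers table and the rules are invented for illustration; a real
# implementation would run against a warehouse and feed a governance tool.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "a@example.com", 34), (2, None, 29), (3, "c@example.com", -5)],
)

# Each rule is a name plus a SQL predicate that failing rows satisfy.
rules = {
    "email_not_null": "email IS NULL",
    "age_in_range": "age < 0 OR age > 120",
}

total = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
for name, predicate in rules.items():
    failures = conn.execute(
        f"SELECT COUNT(*) FROM customers WHERE {predicate}"
    ).fetchone()[0]
    pass_rate = 100.0 * (total - failures) / total
    print(f"{name}: {failures} failing rows ({pass_rate:.1f}% pass)")
```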

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mumbai

Work from Office

Required skills:
- Experience with Scala (object-oriented and functional programming).
- Strong SQL background, including Spark SQL and Hive; data engineering experience.
- SQL experience with data pipelines and data lakes.
- Strong background in distributed computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- SQL experience with data pipelines and data lakes.
- Strong background in distributed computing.
- Experience with Scala (object-oriented and functional programming).
- Strong SQL background.

Preferred technical and professional experience:
- Core Scala development experience.

Posted 1 month ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Design, develop, optimize, and maintain data pipelines that adhere to ETL principles and business goals.
- Solve complex data problems to deliver insights that help the business achieve its goals.
- Source data (structured and unstructured) from various touchpoints, and format and organize it into an analyzable format.
- Create data products for analytics team members to improve productivity.
- Call AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Foster a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Prepare data to create a unified database, and build tracking solutions that ensure data quality.
- Create production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills:
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and the corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse.

Additional Information:
- Experience working with cloud data warehouses such as Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering.

Posted 1 month ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Design, develop, optimize, and maintain data pipelines that adhere to ETL principles and business goals.
- Solve complex data problems to deliver insights that help the business achieve its goals.
- Source data (structured and unstructured) from various touchpoints, and format and organize it into an analyzable format.
- Create data products for analytics team members to improve productivity.
- Call AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Foster a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Prepare data to create a unified database, and build tracking solutions that ensure data quality.
- Create production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills:
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and the corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse.

Additional Information:
- Experience working with cloud data warehouses such as Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering.

Qualification
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation

Posted 1 month ago

Apply

3.0 - 4.0 years

5 - 9 Lacs

Kochi

Work from Office

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python, PySpark
Good-to-have skills: Redshift

Job Summary
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.

Roles & Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools.
- Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture).
- Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases.
- Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery.
- Optimize data pipelines for performance and cost efficiency.
- Implement and enforce best practices for data governance, access control, security, and compliance in the cloud.
- Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
- Lead and mentor junior engineers, fostering a culture of continuous learning and innovation.
- Excellent communication skills; able to work independently and with clients based in Western Europe.

Professional & Technical Skills:
- Design, develop, optimize, and maintain data pipelines that adhere to ETL principles and business goals.
- Solve complex data problems to deliver insights that help the business achieve its goals.
- Source data (structured and unstructured) from various touchpoints, and format and organize it into an analyzable format.
- Create data products for analytics team members to improve productivity.
- Call AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Foster a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Prepare data to create a unified database, and build tracking solutions that ensure data quality.
- Create production-grade analytical assets deployed using the guiding principles of CI/CD.
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 3-4 years of hands-on experience with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and the corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse.

Additional Information:
- Experience working with cloud data warehouses such as Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering.

Qualification
Experience: 5-8 years of experience is required
Educational Qualification: Graduation

Posted 1 month ago

Apply

2.0 - 3.0 years

4 - 8 Lacs

Kochi

Work from Office

Job Title: Data Engineer Sr. Analyst, ACS SONG
Management Level: Level 10 - Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Design, develop, optimize, and maintain data pipelines that adhere to ETL principles and business goals.
- Solve complex data problems to deliver insights that help the business achieve its goals.
- Source data (structured and unstructured) from various touchpoints, and format and organize it into an analyzable format.
- Create data products for analytics team members to improve productivity.
- Call AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Foster a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Prepare data to create a unified database, and build tracking solutions that ensure data quality.
- Create production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills:
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and the corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse.

Additional Information:
- Experience working with cloud data warehouses such as Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering.

About Our Company | Accenture

Qualification
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Python (programming language), AWS Aurora, PySpark, Oracle Procedural Language Extensions to SQL (PL/SQL)
Good-to-have skills: NA
Minimum experience: 12 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Lead the application development process effectively.
- Ensure seamless communication within the team and with stakeholders.

Professional & Technical Skills:
- Must-have: Proficiency in Python, AWS Aurora, PySpark, and Oracle PL/SQL.
- Strong understanding of cloud computing concepts.
- Experience in designing and implementing scalable applications.
- Proficient in database management and optimization.
- Skilled in troubleshooting and problem-solving.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Python.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 20 Lacs

Bengaluru

Work from Office

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

About Salesforce

Role Description
Salesforce has immediate opportunities for software developers who want their lines of code to have a significant and measurable positive impact for users, the company's bottom line, and the industry. You will be working with a group of world-class engineers to build the breakthrough features our customers will love, adopt, and use while keeping our trusted CRM platform stable and scalable. The software engineer role at Salesforce encompasses architecture, design, implementation, and testing to ensure we build products right and release them with high quality. We pride ourselves on writing high-quality, maintainable code that strengthens the stability of the product and makes our lives easier. We embrace the hybrid model and celebrate the individual strengths of each team member while encouraging everyone on the team to grow into the best version of themselves. We believe that autonomous teams with the freedom to make decisions will empower the individuals, the product, the company, and the customers they serve to thrive.

Your Impact
As a Lead Engineer, your job responsibilities will include:
- Build new and exciting components in an ever-growing and evolving market technology to provide scale and efficiency.
- Develop high-quality, production-ready code that can be used by millions of users of our applications.
- Make design decisions on the basis of performance, scalability, and future expansion.
- Work in a hybrid engineering model and contribute to all phases of the SDLC, including design, implementation, code reviews, automation, and testing of features.
- Build efficient components and algorithms in a microservice, multi-tenant SaaS cloud environment.
- Code review, mentoring junior engineers, and providing technical guidance to the team (depending on the seniority level).

Required Skills:
- Mastery of multiple programming languages and platforms.
- 10+ years of software development experience.
- Deep knowledge of object-oriented programming and scripting languages: Java, Python, Scala, C#, Go, Node.js, and C++.
- Strong SQL skills and experience with relational and non-relational databases (e.g., Postgres, Trino, Redshift, Mongo).
- Experience developing SaaS applications on public cloud infrastructure (AWS, Azure, or GCP).
- Proficiency in queues, locks, scheduling, event-driven architecture, and workload distribution, along with a deep understanding of relational and non-relational databases.
- A deep understanding of software development best practices and demonstrated leadership skills.
- Degree or equivalent relevant experience required. Experience will be evaluated based on the core competencies for the role (e.g., extracurricular leadership roles, military experience, volunteer roles, work experience, etc.).

Benefits & Perks
Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training with Trailhead.com. Exposure to executive thought leaders and regular 1:1 coaching with leadership. Volunteer opportunities and participation in our 1:1:1 model for giving back to the community.
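As a toy illustration of the queue-based, event-driven patterns in the skills list above, here is a self-contained Python sketch of a producer/worker setup using only the standard library; a real multi-tenant SaaS backend would use a managed broker rather than an in-process queue.

```python
# Toy event-driven producer/worker sketch using only the standard library.
# A real system would use a managed broker (e.g., Kafka or SQS) instead of
# an in-process queue; this only shows the shape of the pattern.
import queue
import threading

events: queue.Queue = queue.Queue()
SENTINEL = None  # signals workers to shut down

def worker(worker_id: int) -> None:
    while True:
        event = events.get()
        if event is SENTINEL:
            events.task_done()
            break
        print(f"worker {worker_id} handled {event}")
        events.task_done()

# Start a small pool of workers that consume events concurrently.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

# Produce some events, then signal shutdown (one sentinel per worker).
for n in range(10):
    events.put({"event_id": n, "type": "record.updated"})
for _ in threads:
    events.put(SENTINEL)

events.join()
for t in threads:
    t.join()
```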

Posted 1 month ago

Apply

2.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Job Title: S&C Global Network - AI - CMT DE - Consultant
Management Level: 9 - Consultant
Location: Open
Must-have skills: Data Engineering
Good-to-have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills.

Job Summary
We are looking for a passionate and results-driven Data Engineer to join our growing data team. You will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support data-driven decision-making across the organization.

Roles & Responsibilities:
- Design, build, and maintain robust, scalable, and efficient data pipelines (ETL/ELT).
- Work with structured and unstructured data across a wide variety of data sources.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Optimize data systems and architecture for performance, scalability, and reliability.
- Monitor data quality and support initiatives to ensure clean, accurate, and consistent data.
- Develop and maintain data models and metadata.
- Implement and maintain best practices in data governance, security, and compliance.

Professional & Technical Skills:
- 2+ years in data engineering or related fields.
- Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Strong programming skills in Python, Scala, or Java.
- Experience with big data technologies such as Spark, Hadoop, or Hive.
- Familiarity with cloud platforms like AWS, Azure, or GCP, especially services like S3, Redshift, BigQuery, or Azure Data Lake.
- Experience with orchestration tools like Airflow, Luigi, or similar.
- Solid understanding of data warehousing concepts and data modeling techniques.
- Good problem-solving skills and attention to detail.
- Experience with modern data stack tools like dbt, Snowflake, or Databricks.
- Knowledge of CI/CD pipelines and version control (e.g., Git).
- Exposure to containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).

Additional Information:
- The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients.
- This position is based at our Bengaluru office (preferred) or other Accenture AI locations.

About Our Company | Accenture

Qualification
Experience: 4+ years
Educational Qualification: BTech/BE

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Cloud Data Architecture
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Solutions Architect - Lead, you will analyze, design, code, and test multiple components of application code. You will perform maintenance, enhancements, and/or development work, contributing to the overall success of the projects.

Roles & Responsibilities:
- Design and develop the overall architecture of our digital data platform using AWS services.
- Create and maintain cloud infrastructure designs and architectural diagrams.
- Collaborate with stakeholders to understand business requirements and translate them into scalable AWS-based solutions.
- Evaluate and recommend AWS technologies, services, and tools for the platform.
- Ensure the scalability, performance, security, and cost-effectiveness of the AWS-based platform.
- Lead and mentor the technical team in implementing architectural decisions and AWS best practices.
- Develop and maintain architectural documentation and standards for AWS implementations.
- Stay current with emerging AWS technologies, services, and industry trends.
- Optimize existing AWS infrastructure for performance and cost.
- Implement and manage disaster recovery and business continuity plans.

Professional & Technical Skills:
- Minimum 8 years of experience in IT architecture, with at least 5 years in a solutions architect role.
- Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM).
- Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena).
- Experience with infrastructure as code (e.g., CloudFormation, Terraform).
- Exposure to continuous integration/continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Proficiency in multiple programming languages and frameworks.
- AWS Certified Solutions Architect - Professional certification required.

Additional Information:
- The candidate should have a minimum of 5 years of experience in a solutions architect role.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required (Bachelor of Engineering in Electronics/Computer Science, or any related stream).

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
