
3652 Redshift Jobs - Page 14

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications, leveraging a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions
- Knowledge of data lake and data warehouse concepts
- Experience working with AWS big data technologies
- Improve the quality and reliability of data pipelines through monitoring, validation, and failure detection
- Deploy and configure components to production environments

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8
Education qualification: BTech/MBA/MCA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Required Skills: AWS DevOps
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
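For context on the stack this posting names (S3, Glue, Lambda, SQL, PySpark, Redshift), here is a minimal PySpark sketch of an S3-to-Redshift ingestion step. All bucket names, tables, and credentials are hypothetical placeholders; a real Glue job would typically use the Glue Redshift connector and IAM-based COPY rather than raw JDBC.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-to-redshift-sketch").getOrCreate()

# Ingest raw, semi-structured events from the data lake (hypothetical path).
raw = spark.read.json("s3://example-data-lake/raw/events/")

# Basic quality step: drop malformed rows and deduplicate on the business key.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load the curated result into Redshift over JDBC (placeholder endpoint).
(clean.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/analytics")
      .option("dbtable", "curated.events")
      .option("user", "etl_user")       # use Secrets Manager in practice
      .option("password", "REDACTED")
      .mode("append")
      .save())
```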

Posted 1 week ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

The purpose of this role is to oversee the development of our database marketing solutions, using database technologies such as Microsoft SQL Server/Azure, Amazon Redshift, and Google BigQuery. The role is involved in design, development, troubleshooting, and issue resolution, and in upgrading, enhancing, and optimizing the technical solution, including continuous integration and continuous deployment of changes to the business logic implementation. It requires interaction with internal stakeholders and/or clients to explain technology solutions, and a clear understanding of the client's business requirements in order to guide the optimal design/solution to meet their needs. The ability to communicate with both technical and non-technical audiences is key.

Job Description:

Must-have skills:
- Database (SQL Server / Snowflake / Teradata / Redshift / Vertica / Oracle / BigQuery / Azure DW, etc.)
- ETL (Extract, Transform, Load) tools (Talend, Informatica, SSIS, DataStage, Matillion)
- Python, UNIX shell scripting
- Project and resource management
- Workflow orchestration (Tivoli, Tidal, Stonebranch)
- Client-facing skills

Good-to-have skills:
- Experience in cloud computing (one or more of AWS, Azure, GCP); AWS preferred

Key responsibilities:
- Understanding and practical knowledge of data warehouses, data marts, data modelling, data structures, databases, and data ingestion and transformation
- Strong understanding of ETL processes as well as database skills and common IT offerings, i.e. storage, backups, and operating systems
- Strong understanding of SQL and database programming languages
- Strong knowledge of development methodologies and tools
- Contribute to design and oversee code reviews for compliance with development standards
- Design and implement the technical vision for existing clients
- Convert documented requirements into technical solutions and implement them within the given timeline without quality issues
- Quickly identify solutions for production failures and fix them
- Document project architecture, explain detailed design to the team, and create low-level to high-level designs
- Perform mid- to complex-level tasks independently
- Support clients, data scientists, and analytical consultants working on marketing solutions
- Work with cross-functional internal teams and external clients
- Strong project management and organization skills; ability to lead 1-2 projects with a team size of 2-3 members
- Code management, including code review and deployments

Location: DGS India - Pune - Baner M- Agile
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 week ago

Apply

15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

P3-C3-TSTS

- 15+ years of senior data modeling experience; Erwin data modelling tool
- Experience with AWS and Redshift database models is a plus
- Experience designing the L2 layer for a large data transformation program
- Experience working as a lead data modeler with senior stakeholders
- Knowledge of relational databases and data architecture computer systems, including SQL
- Experience with ER modeling, big data, enterprise data, and physical data models
- Experience working with large data sets; banking experience is a must
- Finance data experience and Anaplan experience are a plus
- Familiarity with data modeling software such as SAP PowerDesigner, Microsoft Visio, or erwin Data Modeler
- Excellent presentation, communication, and organizational skills
- Strong attention to detail
- Ability to work in a fast-paced environment
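As a rough illustration of the physical-modeling side of this role, the sketch below creates a Redshift fact table with distribution and sort keys via the boto3 Redshift Data API. Table, column, and cluster names are invented for illustration only.

```python
import boto3

DDL = """
CREATE TABLE IF NOT EXISTS l2.fact_transactions (
    transaction_id BIGINT,
    account_sk     BIGINT,         -- surrogate key to a dim_account table
    booking_date   DATE,
    amount         DECIMAL(18,2),
    currency_code  CHAR(3)
)
DISTKEY (account_sk)               -- co-locate joins against dim_account
SORTKEY (booking_date);            -- typical filter column in banking queries
"""

client = boto3.client("redshift-data")
client.execute_statement(
    ClusterIdentifier="example-cluster",   # hypothetical cluster
    Database="analytics",
    DbUser="modeler",
    Sql=DDL,
)
```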

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements
- Strive for continuous improvement by testing the built solution and working under an agile framework
- Discover and implement the latest technology trends to maximize value and build creative solutions

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem
- Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance
- Automation and optimization: automate data pipeline processes to ensure efficiency

Preferred Technical and Professional Experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering
- Good to have: detection and prevention tools for company products and platform and customer-facing systems
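A minimal sketch of the automation pattern this posting describes (Lambda plus Glue): a Lambda handler that starts a Glue ETL job whenever a new object lands in S3. The job name and argument keys are hypothetical.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event records carry the bucket/key of the newly arrived file.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="curate-events-job",               # hypothetical Glue job
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
    return {"status": "started"}
```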

Posted 1 week ago

Apply

5.0 years

4 - 7 Lacs

Hyderābād

On-site

Job Description

Manager, Data Visualization

The Opportunity: Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of the company IT operating model, tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the tech centers.

Role Overview: A unique opportunity to be part of an Insight & Analytics data hub for a leading biopharmaceutical company and define a culture that creates a compelling customer experience. Bring your entrepreneurial curiosity and learning spirit into a career of purpose, personal growth, and leadership. We are seeking those who have a passion for using data, analytics, and insights to drive decision-making that will allow us to tackle some of the world's greatest health threats. As a Manager in Data Visualization, you will focus on designing and developing compelling data visualization solutions that enable actionable insights and facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals and dashboards that empower stakeholders with data-driven insights and decision-making capability. Our Quantitative Sciences team uses big data to analyze the safety and efficacy claims of our potential medical breakthroughs. We review the quality and reliability of clinical studies using deep scientific knowledge, statistical analysis, and high-quality data to support decision-making in clinical trials.

What will you do in this role:
- Design and develop user-centric data visualization solutions utilizing complex data sources
- Identify and define key business metrics and KPIs in partnership with business stakeholders
- Define and develop scalable data models, in alignment with and with support from data engineering and IT teams
- Lead UI/UX workshops to develop user stories, wireframes, and intuitive visualizations
- Collaborate with data engineering, data science, and IT teams to deliver business-friendly dashboard and reporting solutions
- Apply best practices in data visualization design and continuously improve the user experience for business stakeholders
- Provide thought leadership and data visualization best practices to the broader Data & Analytics organization
- Identify opportunities to apply data visualization technologies to streamline and enhance manual/legacy reporting deliveries
- Provide training and coaching to internal stakeholders to enable a self-service operating model
- Co-create information governance and apply data privacy best practices to solutions
- Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace

What should you have:
- 5 years' relevant experience in data visualization, infographics, and interactive visual storytelling
- Working experience and knowledge in Power BI, Qlik, Spotfire, Tableau, and other data visualization technologies
- Working experience and knowledge in ETL processes and data modeling techniques and platforms (Alteryx, Informatica, Dataiku, etc.)
- Experience working with database technologies (Redshift, Oracle, Snowflake, etc.) and data processing languages (SQL, Python, R, etc.)
- Experience in leveraging and managing third-party vendors and contractors
- Self-motivation, proactivity, and ability to work independently with minimum direction
- Excellent interpersonal and communication skills
- Excellent organizational skills, with the ability to navigate a complex matrix environment and organize/prioritize work efficiently and effectively
- Demonstrated ability to collaborate with and lead diverse groups of work colleagues and positively manage ambiguity
- Experience in the pharma and/or biotech industry is a plus

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today. #HYDIT2025

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid
Travel Requirements: Not Specified
Required Skills: Business Intelligence (BI), Clinical Decision Support (CDS), Clinical Testing, Communication, Create User Stories, Data Visualization, Digital Transformation, Healthcare Innovation, Information Technology Operations, IT Operation, Management Process, Marketing, Motivation Management, Requirements Management, Self Motivation, Statistical Analysis, Statistics, Thought Leadership, User Experience (UX) Design
Job Posting End Date: 07/31/2025 (a job posting is effective until 11:59:59 PM on the day before the listed end date; please apply no later than the day before the end date)
Requisition ID: R359276

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera
Sponsorship Available: No
Relocation Assistance Available: No

STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer

Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists, information technology specialists, and our global technology and operations teams to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects has continued to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We'd like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams.

What You Will Do. We think you'll be excited about having opportunities to:
- Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products
- Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources
- Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP
- Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB)
- Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment
- Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams
- Implement DevOps and DataOps practices, including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes)
- Learn about the tire industry and tire manufacturing processes from subject matter experts
- Be a part of cross-functional teams working together to deliver impactful results

What We Expect:
- Bachelor's degree in computer science or a similar technical field; preferred: Master's degree in computer science or a similar field
- 5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows
- Experience with the following technology groups:
  - Strong experience in Python, Java, or other languages for data engineering and scripting
  - Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, DynamoDB), including query optimization and schema design
  - Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP
  - Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt)
  - Understanding of RESTful APIs and integration of data services with applications
  - Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks
- Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization
- Good teamwork skills: ability to work in a team environment and deliver results on time
- Strong communication skills: capable of conveying information concisely to diverse audiences

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.

Goodyear is one of the world's largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
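To make the orchestration item concrete, here is a skeletal Airflow 2.x DAG of the extract-transform-load shape the posting mentions. The DAG id, schedule, and task bodies are placeholders, not Goodyear's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():   ...   # e.g., pull from PostgreSQL or a source API
def transform(): ...   # clean and model the data
def load():      ...   # publish to the warehouse (e.g., Redshift)

with DAG(
    dag_id="plant_metrics_pipeline",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```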

Posted 1 week ago

Apply

2.0 years

4 - 7 Lacs

Hyderābād

On-site

ABOUT FLUTTER ENTERTAINMENT: Flutter Entertainment is the world's largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME's 100 Most Influential Companies under the 'Pioneers' category, a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game! Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

FLUTTER ENTERTAINMENT INDIA: Our Hyderabad office, located in one of India's premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to 1,000+ talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we're dedicated to creating a brighter future for our customers, colleagues, and communities.

OVERVIEW OF THE ROLE: We are seeking a technically skilled Regulatory Data Analyst to join our dynamic Data & Analytics (ODA) department in Hyderabad, India. As a globally recognized and highly regulated brand, we are deeply committed to delivering accurate reporting and critical business insights that push the boundaries of our understanding through innovation. You'll be joining a team of exceptional data professionals with a strong command of analytical tools and statistical techniques. You'll help shape the future of online gaming by leveraging robust technical capabilities to ensure regulatory compliance, support risk management, and strengthen business operations through advanced data solutions. You will work with large, complex datasets, interrogating and manipulating data using advanced SQL and Python, building scalable dashboards, and developing automation pipelines. Beyond in-depth analysis, you'll create regulatory reports and visualizations for diverse audiences, and proactively identify areas to enhance efficiency and compliance through technical solutions.

KEY RESPONSIBILITIES:
- Query data from various database environments (e.g., DB2, MS SQL Server, Azure) using advanced SQL techniques
- Perform data processing and statistical analysis using tools such as Python, R, and Excel
- Translate regulatory data requirements into structured analysis using robust scripting and automation
- Design and build interactive dashboards and reporting pipelines using Power BI, Tableau, or MicroStrategy to highlight key metrics and regulatory KPIs
- Develop compelling data visualizations and executive summaries to communicate complex insights clearly to technical and non-technical stakeholders alike
- Collaborate with global business stakeholders to interpret jurisdiction-specific regulations and provide technically sound, data-driven insights
- Recommend enhancements to regulatory processes through data modelling, root cause analysis, and applied statistical techniques (e.g., regression, hypothesis testing)
- Ensure data quality, governance, and lineage in all deliverables, applying technical rigor and precision

TO EXCEL IN THIS ROLE, YOU WILL NEED:
- 2 to 4 years of relevant work experience as a Data Analyst or in a role focused on regulatory or compliance-based analytics
- Bachelor's degree in a quantitative or technical discipline (e.g., Mathematics, Statistics, Economics, or Computer Science)
- Proficiency in SQL, with the ability to write and optimize complex queries from scratch
- Strong programming skills in Python (or R) for automation, data wrangling, and statistical analysis
- Experience using data visualization and BI tools (MicroStrategy, Tableau, Power BI) to create dynamic dashboards and visual narratives
- Knowledge of data warehousing environments such as Microsoft SQL Server Management Studio or Amazon Redshift
- Ability to apply statistical methods such as time series analysis, regression, and causal inference to solve regulatory and business problems

BENEFITS WE OFFER:
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling
- Career growth through Internal Mobility Programs
- Comprehensive Health Insurance for you and dependents
- Well-Being Fund and 24/7 Assistance Program for holistic wellness
- Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals
- Free Meals, Cab Allowance, and a Home Office Setup Allowance
- Employer PF Contribution, gratuity, Personal Accident & Life Insurance
- Sharesave Plan to purchase discounted company shares
- Volunteering Leave and Team Events to build connections
- Recognition through the Kudos Platform and Referral Rewards

WHY CHOOSE US: Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
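For a sense of the "advanced SQL" this role calls for, here is an illustrative windowed query computing a per-jurisdiction running total of stakes, executed from Python via pandas. The schema, table, and connection string are hypothetical.

```python
import pandas as pd
import sqlalchemy

QUERY = """
WITH daily AS (
    SELECT jurisdiction,
           CAST(bet_ts AS DATE) AS bet_date,
           SUM(stake_amount)    AS daily_stakes
    FROM   wagering.bets
    GROUP  BY jurisdiction, CAST(bet_ts AS DATE)
)
SELECT jurisdiction,
       bet_date,
       daily_stakes,
       SUM(daily_stakes) OVER (
           PARTITION BY jurisdiction
           ORDER BY bet_date
           ROWS UNBOUNDED PRECEDING
       ) AS cumulative_stakes
FROM daily;
"""

engine = sqlalchemy.create_engine("mssql+pyodbc://...")  # placeholder DSN
kpis = pd.read_sql(QUERY, engine)
print(kpis.head())
```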

Posted 1 week ago

Apply

5.0 years

4 - 6 Lacs

Gurgaon

On-site

This role requires 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
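Since the posting names Kafka and PySpark as mandatory skills, the sketch below shows the standard Structured Streaming pattern of consuming a Kafka topic and landing it on S3. Broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka package is on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-s3-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder
         .option("subscribe", "orders")                       # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing.
events = stream.selectExpr("CAST(value AS STRING) AS payload")

query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-bucket/streams/orders/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
          .start()
)
query.awaitTermination()
```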

Posted 1 week ago

Apply

3.0 years

4 - 6 Lacs

Gurgaon

On-site

Company Description: At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content, wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description: This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple delocalized project teams across regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Responsibilities:
- System deployment: conceive, design, and build new features in the existing backend processing pipelines
- CI/CD implementation: design and implement CI/CD pipelines for automated build, test, and deployment processes; ensure continuous integration and delivery of features, improvements, and bug fixes
- Code quality and best practices: enforce coding standards, best practices, and design principles; conduct code reviews and provide constructive feedback to maintain high code quality
- Performance optimization: identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores
- Mentorship and collaboration: mentor junior engineers, providing guidance on technical aspects and best practices; collaborate with cross-functional teams to ensure a cohesive and unified approach to software development
- Security and compliance: implement security best practices for all tiers of the system; ensure compliance with industry standards and regulations related to AWS platform security

Key Skills:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- Proven experience (minimum 3 years) in high-volume data processing, with expertise in ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres
- Minimum 2 years of development on an AWS platform
- Strong understanding of CI/CD principles and tools; GitLab a plus
- Excellent problem-solving and debugging skills
- Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions
- Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply
- Uses team collaboration to create innovative solutions efficiently

Other desirable skills:
- Knowledge of networking principles and security best practices
- AWS certifications
- Experience with data warehouses, ETL, and/or data lakes very desirable
- Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus
- Exposure to the Google Cloud Platform (GCP)

Additional Information: Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
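As a rough illustration of the high-volume ETL profile described above (PySpark against Postgres), here is a sketch that reads session records over JDBC, aggregates per panelist, and writes the result back. Hosts, tables, and credentials are placeholders, not Nielsen's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter-aggregation-sketch").getOrCreate()

usage = (
    spark.read.format("jdbc")
         .option("url", "jdbc:postgresql://example-host:5432/meter")
         .option("dbtable", "raw.device_sessions")   # hypothetical table
         .option("user", "reader")
         .option("password", "REDACTED")
         .load()
)

# Daily usage per panelist: total time and distinct apps used.
daily = (
    usage.groupBy("panelist_id", F.to_date("session_start").alias("day"))
         .agg(F.sum("duration_sec").alias("total_seconds"),
              F.countDistinct("app_id").alias("distinct_apps"))
)

(daily.write.format("jdbc")
      .option("url", "jdbc:postgresql://example-host:5432/meter")
      .option("dbtable", "agg.daily_usage")
      .option("user", "writer")
      .option("password", "REDACTED")
      .mode("overwrite")
      .save())
```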

Posted 1 week ago

Apply

5.0 years

4 - 6 Lacs

Gurgaon

On-site

This role requires 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

5.0 years

7 - 9 Lacs

Gurgaon

On-site

This role requires 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

This role requires 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Overview: PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation, unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations, built on PepsiCo's global business scale, to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.

What PepsiCo Data Management and Operations does:
- Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company
- Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset
- Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders
- Increase awareness about available data and democratize access to it across the company

As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities:
- Act as a subject matter expert across different digital projects
- Oversee work with internal clients and external partners to structure and store data in unified taxonomies and link it together with standard identifiers
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance
- Implement best practices around systems integration, security, performance, and data management
- Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners
- Develop and optimize procedures to productionize data science models
- Define and manage SLAs for data products and processes running in production
- Support large-scale experimentation done by data scientists
- Prototype new approaches and build solutions at scale
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create and audit reusable packages or libraries

Qualifications:
- 4+ years of overall technology experience, including at least 3+ years of hands-on software development, data engineering, and systems architecture
- 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools
- 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc.
- 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services (Azure certification is a plus)
- Experience in Azure Log Analytics
- Experience integrating multi-cloud services with on-premises technologies
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools
- Experience with statistical/ML techniques is a plus
- Experience building solutions in the retail or supply chain space is a plus
- Experience with version control systems like GitHub and deployment & CI tools
- Working knowledge of agile development, including DevOps and DataOps concepts
- BTech/BA/BS in Computer Science, Math, Physics, or other technical fields

Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management
- Strong change manager: comfortable with change, especially that which arises through company growth
- Ability to understand and translate business requirements into data and technical requirements
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously
- Positive and flexible attitude, adjusting to different needs in an ever-changing environment
- Strong organizational and interpersonal skills; comfortable managing trade-offs
- Fosters a team culture of accountability, communication, and self-management
- Proactively drives impact and engagement while bringing others along
- Consistently attains/exceeds individual and team goals
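The posting mentions data-quality tools like Deequ and Great Expectations; below is a hand-rolled, Deequ-style illustration of the same idea in plain PySpark (completeness, uniqueness, validity checks). It is a sketch with invented paths and columns, not PepsiCo's tooling.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
df = spark.read.parquet("s3://example-lake/curated/shipments/")  # placeholder

checks = {
    # Completeness: no null business keys.
    "shipment_id_not_null": df.filter(F.col("shipment_id").isNull()).count() == 0,
    # Uniqueness: the business key appears exactly once.
    "shipment_id_unique": df.count() == df.select("shipment_id").distinct().count(),
    # Validity: quantities are non-negative.
    "qty_non_negative": df.filter(F.col("quantity") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```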

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

This role requires 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Description: GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com. We believe that innovative technology starts with the best talent and have been ranked one of Ad Age's Best Places to Work in 2021, 2022, 2023 & 2025!

About Team: The Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, defining KPIs, monitoring them to measure the impact/success of product improvements and changes, and streamlining processes. This is an exciting and challenging role that will enable you to work with large data sets, expose you to cutting-edge analytical techniques, let you work with the latest AWS analytics infrastructure (Redshift, S3, Athena), and give you experience using location data to drive businesses. Working in a dynamic startup environment will give you significant opportunities for growth within the organisation. A successful applicant will be passionate about technology and about developing a deep understanding of human behaviour in the real world. They will also have excellent communication skills, be able to synthesise and present complex information, and be a fast learner.

You Will:
- Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics
- Understand the objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product and Engineering)
- Derive insights and put them together to build a story that solves a given problem
- Suggest process improvements such as script optimization and automating repetitive tasks
- Create and automate reports and dashboards through Python to track given metrics per requirements

Technical Skills (Must Have):
- 4-year B.Tech degree in Computer Science, Statistics, Mathematics, Economics, or related fields
- 2-4 years of experience working with data and conducting statistical and/or numerical analysis
- Ability to write SQL code
- Scripting/automation using Python
- Hands-on experience with a data visualisation tool like Looker/Tableau/QuickSight
- Basic to advanced understanding of statistics

Other Skills (Must Have):
- Willing and able to quickly learn about new businesses, database technologies, and analysis techniques
- Strong oral and written communication
- Understanding of patterns/trends and ability to draw insights from them

Preferred Qualifications (Nice to Have):
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)

Benefits: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave (maternity and paternity)
- Flexible time off (earned leave, sick leave, birthday leave, bereavement leave & company holidays)
- In-office daily catered breakfast, lunch, snacks, and beverages
- Health cover for any hospitalization; covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (mobile subsidy program)
- Internet reimbursement/postpaid cell phone bill/or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
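For context on the Python report automation this listing describes, here is a hedged sketch that runs an Athena query with boto3 and polls for completion. The database, table, and S3 output locations are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString=(
        "SELECT visit_date, COUNT(*) AS visits "
        "FROM analytics.store_visits GROUP BY visit_date"
    ),
    QueryExecutionContext={"Database": "analytics"},          # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-reports/athena/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes (simplified; add timeouts in production).
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} result rows")  # first row is the header
```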

Posted 1 week ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About the Role: We are looking for a passionate DevOps Engineer with 1–2 years of hands-on experience in cloud infrastructure and CI/CD automation. The ideal candidate will have strong knowledge of AWS services, Infrastructure as Code (IaC), containerization, orchestration tools, and monitoring solutions. If you enjoy solving meaningful problems, collaborating with good people, and building clean, efficient infrastructure, you'll fit right in.

Key Responsibilities:
- Deploy, manage, and monitor applications on AWS (EC2, S3, CloudFront, ALB, Amplify, EKS, ECR, RDS, CloudWatch)
- Build and maintain Infrastructure as Code using Terraform for automated environment provisioning
- Develop and manage CI/CD pipelines using Jenkins and GitHub Actions
- Containerize applications with Docker and orchestrate them with Kubernetes (EKS)
- Set up and manage logging and monitoring solutions (ELK Stack, Grafana, CloudWatch)
- Write shell scripts for automation and process optimization
- Collaborate with developers to ensure smooth deployment and integration of applications

Skills & Qualifications. What we are looking for:
- 1–2 years of experience in a DevOps or Cloud Engineer role
- Strong knowledge of AWS services: EC2, S3, CloudFront, ALB, Amplify, EKS, ECR, RDS, CloudWatch
- Proficiency in Terraform and scripting (Shell/Bash)
- Hands-on experience with CI/CD tools (Jenkins, GitHub Actions)
- Practical experience with Docker and Kubernetes (preferably EKS)
- Familiarity with logging and monitoring tools (ELK Stack, Grafana)
- Good understanding of networking, security best practices, and Linux systems

Good to Have:
- Exposure to AWS Redshift, Lambda, ECS, Nginx, and AWS WAF
- Basic knowledge of application performance tuning and security hardening

Soft Skills:
- Strong problem-solving skills and attention to detail
- Ability to work in a fast-paced environment and handle multiple tasks
- Good communication and collaboration skills
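As a small illustration of the CloudWatch monitoring work in this role, the sketch below creates a CPU alarm for an EC2 instance with boto3. The instance ID and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",          # hypothetical alarm
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                    # evaluate 5-minute datapoints
    EvaluationPeriods=3,           # must breach 3 consecutive periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```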

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description: At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content, wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description: This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple delocalized project teams across regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Responsibilities:
- System deployment: conceive, design, and build new features in the existing backend processing pipelines
- CI/CD implementation: design and implement CI/CD pipelines for automated build, test, and deployment processes; ensure continuous integration and delivery of features, improvements, and bug fixes
- Code quality and best practices: enforce coding standards, best practices, and design principles; conduct code reviews and provide constructive feedback to maintain high code quality
- Performance optimization: identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores
- Mentorship and collaboration: mentor junior engineers, providing guidance on technical aspects and best practices; collaborate with cross-functional teams to ensure a cohesive and unified approach to software development
- Security and compliance: implement security best practices for all tiers of the system; ensure compliance with industry standards and regulations related to AWS platform security

Key Skills:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- Proven experience (minimum 3 years) in high-volume data processing, with expertise in ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres
- Minimum 2 years of development on an AWS platform
- Strong understanding of CI/CD principles and tools; GitLab a plus
- Excellent problem-solving and debugging skills
- Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions
- Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply
- Uses team collaboration to create innovative solutions efficiently

Other desirable skills:
- Knowledge of networking principles and security best practices
- AWS certifications
- Experience with data warehouses, ETL, and/or data lakes very desirable
- Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus
- Exposure to the Google Cloud Platform (GCP)

Additional Information: Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Data Engineer – AWS & Financial Data Reconciliation Location: Ahmedabad (Onsite) Experience: 3+ Years Job Type: Full-Time About the Role: We are looking for a skilled Data Engineer with experience in AWS cloud platforms and strong knowledge of financial data reconciliation processes. This is a key onsite role in our Ahmedabad office, responsible for building scalable data solutions and ensuring the integrity and accuracy of financial data across multiple systems. Key Responsibilities: Design, develop, and maintain robust ETL/ELT pipelines using AWS services (e.g., AWS Glue, Lambda, Redshift, S3, Athena). Reconcile large datasets from multiple financial systems, ensuring data accuracy, completeness, and auditability. Automate data validation and reconciliation processes to support finance, accounting, and compliance teams. Collaborate with cross-functional teams (finance, data analytics, business intelligence) to gather requirements and implement data solutions. Create and maintain data dictionaries, lineage documentation, and audit trails for financial datasets. Monitor pipeline performance and troubleshoot data quality or processing issues. Implement data governance best practices and support regulatory and internal compliance needs. Required Skills & Qualifications: Minimum 3 years of experience in data engineering or similar roles. Proven experience working with financial data and data reconciliation (e.g., transactional data, ledger entries, settlements, P&L, balance sheets). Strong experience with AWS services: S3, Glue, Lambda, Redshift, Athena, CloudWatch, Step Functions. Strong in SQL and scripting languages such as Python or Scala. Experience building and automating data validation and reconciliation tools or processes. Familiarity with data warehousing concepts and data lake architecture. Experience working with version control systems like Git and CI/CD pipelines. Bachelor’s degree in Computer Science, Information Systems, Finance, or a related field. Nice to Have: Understanding of accounting principles and finance domain KPIs. Experience with data visualization and BI tools (e.g., Power BI, Tableau, AWS QuickSight). Knowledge of data quality frameworks, audits, and controls. Exposure to real-time data streaming platforms (e.g., Kafka, Kinesis). Experience with infrastructure-as-code tools (Terraform, CloudFormation).
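As a concrete illustration of the reconciliation work described above, here is a minimal pandas sketch that joins a ledger extract against a settlements extract and flags breaks. The data, column names, and tolerance are illustrative assumptions; a real pipeline would read these extracts from Redshift or S3 rather than inline literals.

```python
# Hedged sketch of a ledger-vs-settlement reconciliation check.
import pandas as pd

ledger = pd.DataFrame({
    "txn_id": ["T1", "T2", "T3"],
    "amount": [100.00, 250.50, 75.25],
})
settlements = pd.DataFrame({
    "txn_id": ["T1", "T2", "T4"],
    "amount": [100.00, 250.00, 10.00],
})

# A full outer join keeps unmatched transactions on either side for the
# audit trail; indicator=True records which side each row came from.
recon = ledger.merge(settlements, on="txn_id", how="outer",
                     suffixes=("_ledger", "_settle"), indicator=True)

# Flag breaks: missing on one side, or amounts disagreeing beyond a tolerance.
recon["break"] = (
    (recon["_merge"] != "both")
    | ((recon["amount_ledger"] - recon["amount_settle"]).abs() > 0.01)
)
print(recon[recon["break"]])
```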

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Experience - 5 to 10 years Notice Period - Immediate joiners only Location - Bangalore (weekly 2 days work from client office - Domlur) 1. Technical Skills Web & App Analytics Tools Google Analytics 4 (GA4) – Setup, event tracking, funnel analysis, attribution modeling Adobe Analytics – Workspace dashboards, segmentation, calculated metrics, processing rules Tag Management Systems Google Tag Manager (GTM) & Segment.io – Custom tags, triggers, variables, data layer usage SQL & Data Querying Data extraction and transformation Writing complex queries (CTEs, joins, aggregations, window functions) Platforms: Redshift, SQL Server Data Visualization & BI Tools Tableau, Looker, Power BI, Data Studio – Dashboard creation, data blending, storytelling with data A/B Testing & CRO Tools VWO, Google Optimize, Adobe Target – Test planning, setup, and result analysis Hotjar Scripting & Automation Excel/Sheets – Pivot tables, formulas, lookups, charts, macros 2. Analytical & Marketing Skills Attribution Modelling Last click, linear, position-based, data-driven models Multi-touch attribution and funnel drop-off analysis Customer Journey Mapping Path analysis, event funnels, retention analysis Marketing Performance Analytics Campaign ROI tracking (across paid, organic, affiliate) UTM parameter strategy and traffic source analysis Segmentation & Cohort Analysis Behavioral cohorts, lifecycle stages Conversion Rate Optimization (CRO) Hypothesis formulation, testing plan, post-test analysis Data Quality & Governance Data layer validation, debugging tools, ensuring tracking integrity 3. Soft Skills & Strategic Thinking Problem Solving & Insight Generation Translating business questions into analysis Providing actionable insights from data Communication & Storytelling Presenting data to non-technical stakeholders Creating executive-ready dashboards and narratives Stakeholder Management Cross-functional collaboration with marketing, product, dev, and leadership teams Project Management Scoping analytics implementations, prioritizing work, agile reporting cycles Attention to Detail Precision in data validation, anomaly detection, and impact analysis Nice-to-Have Skills Knowledge of JavaScript and DOM for debugging Familiarity with server-side tracking and first-party data strategy Experience with Privacy & Compliance (GDPR, CCPA) Familiarity with eCommerce user journey
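To make the SQL expectations above concrete, here is a hedged example of a funnel query using a window function, run against Redshift from Python. The table, columns, event names, and connection details are hypothetical assumptions, not the client's schema.

```python
# Funnel drop-off sketch: order each visitor's events, then count distinct
# visitors reaching each step. psycopg2 works against Redshift's Postgres
# wire protocol; all identifiers here are placeholders.
import psycopg2

FUNNEL_SQL = """
WITH ordered AS (
    SELECT
        visitor_id,
        event_name,
        ROW_NUMBER() OVER (PARTITION BY visitor_id ORDER BY event_ts) AS step_order
    FROM web_events
    WHERE event_name IN ('view_product', 'add_to_cart', 'purchase')
)
SELECT event_name,
       COUNT(DISTINCT visitor_id) AS visitors
FROM ordered
GROUP BY event_name
ORDER BY visitors DESC;
"""

conn = psycopg2.connect(host="example-redshift-host", port=5439,
                        dbname="analytics", user="analyst", password="****")
with conn.cursor() as cur:
    cur.execute(FUNNEL_SQL)
    for event_name, visitors in cur.fetchall():
        print(event_name, visitors)
conn.close()
```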

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Calfus At Calfus, we are known for delivering cutting-edge AI agents and products that transform businesses in ways previously unimaginable. We empower companies to harness the full potential of AI, unlocking opportunities they never imagined possible before the AI era. Our software engineering teams are highly valued by customers, whether start-ups or established enterprises, because we consistently deliver solutions that drive revenue growth. Our ERP solution teams have successfully implemented cloud solutions and developed tools that seamlessly integrate with ERP systems, reducing manual work so teams can focus on high-impact tasks. None of this would be possible without talent like you! Our global teams thrive on collaboration, and we’re actively looking for skilled professionals to strengthen our in-house expertise and help us deliver exceptional AI, software engineering, and solutions using enterprise applications. As one of the fastest-growing companies in our industry, we take pride in fostering a culture of innovation where new ideas are always welcomed—without hesitation. We are driven and expect the same dedication from our team members. Our speed, agility, and dedication set us apart, and we perform best when surrounded by high-energy, driven individuals. To continue our rapid growth and deliver an even greater impact, we invite you to apply for our open positions and become part of our journey! About the role: As a Data Engineer – BI Analytics & DWH, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower our organization to make data-driven decisions. You will leverage your expertise in Power BI, Tableau, and ETL processes to create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels. What You’ll Do: ETL/ELT Development: Design, build, and maintain efficient ETL and ELT processes using tools such as Azure Data Factory, Databricks, or similar orchestration frameworks. Ensure reliable data ingestion from various sources into centralized storage systems (DWH/data lakes). Data Modelling & Warehousing: Design relational and dimensional data schemas tailored to business use cases in data lakes or traditional data warehouses (e.g., Snowflake, Redshift, Postgres). Develop and maintain data models optimized for analytics and reporting. Database Engineering: Write efficient SQL queries and stored procedures to transform, clean, and aggregate data. Manage data transformations and complex joins to support analytical workloads. BI & Visualization Support: Provide basic support for report and dashboard development in visualization tools such as Tableau or Power BI. Collaborate with analysts and business users to understand reporting needs and enable self-service BI. Performance & Data Quality: Monitor and troubleshoot data pipelines and warehouse jobs to ensure timely and accurate data availability. Apply basic data quality checks and validations to ensure trustworthiness of data outputs. On your first day, we'll expect you to have: Bachelor’s degree in computer science, Information Systems, Data Engineering, or a related field. 3–5 years of experience in data engineering with hands-on ETL/ELT development and data modelling. Solid SQL skills and experience with database systems such as SQL Server, Postgres, Snowflake, or Redshift.
Exposure to cloud-based services and data tools (e.g., AWS S3, Azure Data Factory, Databricks, Lambda functions). Basic understanding of data serialization formats like JSON, Parquet, or CSV. Familiarity with Python for scripting and data manipulation. Bonus: Exposure to visualization tools such as Power BI or Tableau for dashboard/report creation will be a plus. We'd be super excited if you have: Azure SDK experience; the ability to interact with REST APIs and perform web scraping tasks. Benefits: At Calfus, we value our employees and offer a strong benefits package. This includes medical, group, and parental insurance, coupled with gratuity and provident fund options. Further, we support employee wellness and provide birthday leave as a valued benefit. Calfus is an Equal Opportunity Employer. We believe diversity drives innovation. We’re committed to creating an inclusive workplace where everyone—regardless of background, identity, or experience—has the opportunity to thrive. We welcome all applicants!
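As an illustration of the transformation and loading work this role involves, below is a minimal sketch of an idempotent dimension-table load using Postgres's INSERT ... ON CONFLICT upsert. The schema, rows, and connection details are assumptions for illustration; note that Redshift does not support ON CONFLICT, so a staging-table merge would be used there instead.

```python
# Idempotent dimension load sketch for Postgres. Re-running it leaves the
# table in the same state, which is what warehouse loads generally want.
import psycopg2

UPSERT_SQL = """
INSERT INTO dim_customer (customer_id, customer_name, segment)
VALUES (%s, %s, %s)
ON CONFLICT (customer_id)
DO UPDATE SET customer_name = EXCLUDED.customer_name,
              segment = EXCLUDED.segment;
"""

rows = [
    (1, "Acme Corp", "enterprise"),
    (2, "Globex", "mid-market"),
]

conn = psycopg2.connect(host="example-host", dbname="dwh",
                        user="etl_user", password="****")
# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.executemany(UPSERT_SQL, rows)
conn.close()
```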

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

You should have experience in designing and building serverless data lake solutions using a layered components architecture, covering the ingestion, storage, processing, security & governance, data cataloguing & search, and consumption layers. Proficiency in AWS serverless technologies such as Lake Formation, Glue, Glue Python, Glue Workflows, Step Functions, S3, Redshift, QuickSight, Athena, AWS Lambda, and Kinesis is required, and hands-on experience with Glue is essential. You must be skilled in designing, building, orchestrating, and deploying multi-step data processing pipelines using Python and Java. Experience in managing source data access security, configuring authentication and authorization, and enforcing data policies and standards is also necessary. Additionally, familiarity with AWS environment setup and configuration is expected. A minimum of 6 years of relevant experience, with at least 3 years building solutions using AWS, is mandatory. The ability to work under pressure and a commitment to meeting customer expectations are essential for this role. If you meet these requirements and are ready to take on this challenging opportunity, please reach out to hr@Stanratech.com.
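For illustration, here is a minimal boto3 sketch of triggering and polling one Glue job of the kind such a pipeline would chain together. The job name, region, and arguments are hypothetical, and a production design would typically let Step Functions or Glue Workflows handle the orchestration rather than a polling loop.

```python
# Start a (hypothetical) Glue job and wait for a terminal state.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(
    JobName="ingest-raw-to-curated",           # hypothetical Glue job
    Arguments={"--ingest_date": "2024-01-01"}, # passed to the job script
)

# Poll until the run finishes; Step Functions would replace this in practice.
while True:
    state = glue.get_job_run(JobName="ingest-raw-to-curated",
                             RunId=run["JobRunId"])["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("final state:", state)
        break
    time.sleep(30)
```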

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Seekify Global is looking for an experienced and driven Data Catalog Engineer to join the Data Engineering team. The ideal candidate should have a strong background in designing and implementing metadata and data catalog solutions, specifically in AWS-centric data lake and data warehouse environments. As a Data Catalog Engineer, you will play a crucial role in improving data discoverability, governance, and lineage across the organization's data assets. Your key responsibilities will include leading the end-to-end implementation of a data cataloging solution within AWS, establishing and managing metadata frameworks for diverse data assets, integrating the data catalog with AWS-based storage solutions, collaborating with various project teams to define metadata standards and processes, developing automation scripts for metadata management, working closely with other data professionals to ensure data accuracy, and implementing access controls to comply with data privacy standards. The ideal candidate should possess at least 7-8 years of experience in data engineering or metadata management roles, with proven expertise in implementing data catalog solutions within AWS environments. Strong knowledge of AWS services such as Glue, S3, Athena, Redshift, EMR, Data Catalog, and Lake Formation is essential. Proficiency in Python, SQL, and automation scripting for metadata pipelines is required, along with familiarity with data governance and compliance standards. Experience with BI tools and third-party catalog tools is a plus. Preferred qualifications include AWS certifications, experience with data catalog tools like Alation, Collibra, or Informatica EDC, exposure to data quality frameworks and stewardship practices, and knowledge of data migration processes. This is a full-time position that requires in-person work.
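As a sketch of the metadata automation this role mentions, the following boto3 snippet walks the Glue Data Catalog and reports tables that lack descriptions, a common first pass for a catalog stewardship script. The region and catalog contents are assumptions.

```python
# Scan every database/table in the Glue Data Catalog and flag missing
# descriptions. Purely illustrative; a real job might also check owners,
# classifications, or lineage tags.
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

for page in glue.get_paginator("get_databases").paginate():
    for db in page["DatabaseList"]:
        table_pages = glue.get_paginator("get_tables").paginate(
            DatabaseName=db["Name"]
        )
        for tpage in table_pages:
            for table in tpage["TableList"]:
                if not table.get("Description"):
                    print(f"missing description: {db['Name']}.{table['Name']}")
```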

Posted 1 week ago

Apply

0.0 years

0 Lacs

Gachibowli, Hyderabad, Telangana

On-site

Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera Sponsorship Available: No Relocation Assistance Available: No STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists and information technology specialists, as well as our global technology and operations teams, to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects has continued to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course - Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We’d like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams. What You Will Do We think you’ll be excited about having opportunities to: Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products. Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources. Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP. Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB). Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment. Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams. Implement DevOps and DataOps practices including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes). Learn about the tire industry and tire manufacturing processes from subject matter experts. Be a part of cross-functional teams working together to deliver impactful results. What We Expect Bachelor’s degree in computer science or a similar technical field; preferred: Master’s degree in computer science or a similar field. 5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows. Experience with the following technology groups: Strong experience in Python, Java, or other languages for data engineering and scripting.
Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP. Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt). Understanding of RESTful APIs and integration of data services with applications. Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks. Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization. Good teamwork skills - ability to work in a team environment and deliver results on time. Strong communication skills - capable of conveying information concisely to diverse audiences. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
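For a concrete picture of the pipeline orchestration this posting mentions (Airflow among the named frameworks), here is a minimal Airflow DAG sketch wiring extract, transform, and load tasks in sequence. The DAG id, schedule, and task bodies are illustrative placeholders, not Goodyear's pipelines.

```python
# Minimal Airflow 2.x DAG: three dependent tasks run daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")          # placeholder for real extraction

def transform():
    print("clean and model the data")  # placeholder for real transforms

def load():
    print("write to the warehouse")    # placeholder for real loading

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract -> transform -> load
```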

Posted 1 week ago

Apply


4.0 years

0 Lacs

Pune, Maharashtra

Remote

R022093 Pune, Maharashtra, India IT Operations Regular Location: India, Remote This is a remote position, so you’ll be working remotely from your home. You may occasionally visit a GoDaddy office to meet with your team for events or meetings. Join Our Team... Demonstrate your passion for helping small businesses achieve their dreams online. By helping to move strategy into action, you will be improving GoDaddy’s outreach to those small business owners whose dreams are the backbone of our company. Take part in a multichannel environment, turning strategic plans into digital marketing campaigns and ultimately influencing our customers’ success! The Marketing Data Analyst will bring to bear their experience and knowledge of marketing data to deliver timely and relevant omnichannel marketing experiences to our customers worldwide. You will apply your understanding of marketing data within a robust marketing technology platform to drive campaign automation and optimization. This will ensure continuous improvement in scaling operations for our customer marketing programs, including Email, SMS, WhatsApp, and new and emerging channels. What you'll get to do... Serve as the Marketing Data subject matter expert for the Customer Marketing team, with extensive knowledge of data including standard methodologies and anti-patterns. Play an active role in driving requirements for the implementation and integration of an evolving, exceptional marketing automation platform. Craft and develop customer segments to be applied over Email, Web, CRM, SMS, WhatsApp, and many other customer touch points. Collaborate with cross-functional teams in the creation of segmentation and personalisation-based strategies. Perform ongoing analysis of marketing programs and broader business performance to surface key insights and recommendations that help inform our marketing strategy. Ensure the accuracy of our outbound marketing campaigns by driving QA and ongoing monitoring at all levels, all the way up to source data. Your experience should include... 4+ years of experience in marketing data management, specialising in data set development for marketing automation and email marketing. Minimum 4 years of experience working with SQL syntax, relational and non-relational database models, OLAP, and data-driven marketing platforms, with proven experience writing and understanding complex queries. Expertise in testing/optimization methodologies and performance tuning for your own work and in reviews, with strong analytical and data presentation abilities. Experience collaborating with the MarTech Platform Team, Data Platform, and Marketing Managers to present findings and quickly diagnose and troubleshoot emergent issues. Experience in segmentation tools like MessageGears, SQL Server, and AWS database systems such as Redshift and Athena is highly preferred. Experience with data visualisation tools like Tableau and/or QuickSight is preferred. You might also have... Four-year bachelor’s degree required; master’s degree is preferred. Hands-on skills in Python and experience with an enterprise-level marketing automation platform such as Salesforce Marketing Cloud is preferred. Experience working with B2B and B2C data, including lead and prospect management, is nice to have. We've got your back...
We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. GoDaddy’s benefits vary based on individual role and location and can be reviewed in more detail during the interview process. We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way. About us... GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success. To learn more about the company, visit About Us. At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that’s not enough to build true equity and belonging in our communities. That’s why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It’s the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report which can be found on our Diversity Careers page. GoDaddy is proud to be an equal opportunity employer . GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy. Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com. GoDaddy doesn’t accept unsolicited resumes from recruiters or employment agencies.
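As an illustration of the segmentation queries the role above centers on, here is a hedged sketch that selects lapsed, opted-in customers for a win-back email audience from Redshift. The table names, columns, and 90-day threshold are assumptions, not GoDaddy's schema.

```python
# Build a win-back segment: opted-in customers with no orders in 90 days.
# DATEADD is Redshift SQL; all identifiers are placeholders.
import psycopg2

SEGMENT_SQL = """
SELECT c.customer_id, c.email
FROM customers c
LEFT JOIN orders o
  ON o.customer_id = c.customer_id
 AND o.order_date >= DATEADD(day, -90, CURRENT_DATE)
WHERE c.email_opt_in = TRUE
GROUP BY c.customer_id, c.email
HAVING COUNT(o.order_id) = 0;
"""

conn = psycopg2.connect(host="example-redshift-host", port=5439,
                        dbname="marketing", user="analyst", password="****")
with conn.cursor() as cur:
    cur.execute(SEGMENT_SQL)
    audience = cur.fetchall()
conn.close()
print(f"win-back audience size: {len(audience)}")
```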

Posted 1 week ago

Apply