
1766 Redshift Jobs - Page 15

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About The Advanced Analytics Team
The central Advanced Analytics team at the Abbott Established Pharma Division’s (EPD) headquarters in Basel helps define and lead the transformation towards becoming a global, data-driven company with the help of data and advanced technologies (e.g., Machine Learning, Deep Learning, Generative AI, Computer Vision). To us, Advanced Analytics is an important lever to reach our business targets, now and in the future; it helps differentiate us from our competition and ensures sustainable revenue growth at optimal margins. The central AA team is therefore an integral part of the Strategy Management Office at EPD, with close links and regular interactions with the EPD Senior Leadership Team.

Primary Job Function
EPD is looking to fill the role of a Cloud Engineer reporting to the Head of AA Product Development. The Cloud Engineer will be responsible for developing applications leveraging AWS services. This role involves leading cloud initiatives, ensuring robust cloud infrastructure, and driving innovation in cloud technologies to support the business's advanced analytics needs.

Core Job Responsibilities
- Support the development and maintenance of company-wide frameworks and libraries that enable faster, better, and more informed decision-making within the business, creating significant business value from data & analytics.
- Ensure data availability and accessibility for the prioritized Advanced Analytics scope, and maintain stable, scalable, and modular data science pipelines from data exploration to deployment.
- Acquire, ingest, and process data from multiple sources and systems into our cloud platform (AWS), ensuring data integrity and security.
- Collaborate with data scientists to map data fields to hypotheses, and curate, wrangle, and prepare data for advanced analytical models.
- Implement and manage robust security measures to ensure compliant handling and management of data, including access strategies aligned with Information Security, Cyber Security, and Data Privacy principles.
- Develop and deploy smart automation tools based on cloud technologies, aligned with business priorities and needs.
- Oversee the timely delivery of Advanced Analytics solutions in coordination with the rest of the team and per requirements and timelines, ensuring alignment with business goals.
- Collaborate closely with the Data Science team and AI Engineers to understand platform needs and lead the development of solutions that support their work.
- Troubleshoot and resolve issues related to the AWS platform, ensuring minimal downtime and optimal performance.
- Define and document best practices and strategies regarding application deployment and infrastructure maintenance.
- Drive continuous improvement of the AWS Cloud platform by contributing and implementing new ideas and processes.

Supervisory/Management Responsibilities
Direct Reports: None. Indirect Reports: None.

Position Accountability/Scope
The Cloud Engineer is accountable for delivering targeted business impact per initiative in collaboration with key stakeholders. This role involves significant responsibility for the architecture and management of Abbott's strategic cloud platforms and AI/AA programs, enabling faster, better, and more informed decision-making within the business.

Minimum Education
Master's degree in a relevant field (e.g., computer science, electrical engineering).

Minimum Experience/Training Required
- At least 3-5 years of relevant experience, with a strong track record in building solutions/applications using AWS services.
- Proven ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets.
- Proficiency in multiple programming languages: JavaScript, Python, Scala, PySpark, or Java.
- Extensive knowledge and experience with various database technologies, including distributed processing frameworks, relational databases, MPP databases, and NoSQL data stores.
- Deep understanding of Information Security principles to ensure compliant handling and management of data.
- Significant experience with cloud platforms, preferably AWS and its ecosystem.
- Advanced knowledge of development in CI/CD (Continuous Integration and Continuous Delivery) environments.
- Strong background in data warehousing / ETL tools.
- Proficiency in DevOps practices and tools such as Jenkins, Terraform, etc.
- Proficiency in serverless architecture and services like AWS Lambda.
- Understanding of security best practices and their implementation in cloud environments.
- Ability to understand business objectives and create cloud-based solutions to meet them.
- Results-driven, analytical, and creative thinker.
- Proven ability to work with cross-functional teams and bridge the gap between business and data science.
- Fluency in English is a must; additional languages are a plus.

Additional Technical Skills
- Experience with front-end frameworks, preferably React JS.
- Knowledge of back-end frameworks like Django, Flask, or Node.js.
- Familiarity with database technologies such as Redshift, MySQL, or DynamoDB.
- Understanding of RESTful API design and development.
- Experience with version control systems like CodeCommit.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Amex GBT is a place where colleagues find inspiration in travel as a force for good and, through their work, can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued.

We are looking for an experienced Data ETL Developer / BI Engineer who loves solving complex problems across a full spectrum of data & technologies. You will lead the build-out of GBT's new BI platform and manage the legacy platform to seamlessly support our business functions around data and analytics. You will create dashboards, databases, and other platforms that allow for the efficient collection and evaluation of BI data.

What You’ll Do on a Typical Day:
- Design, implement, and maintain systems that collect and analyze business intelligence data.
- Design and architect an analytical data store or cluster for the enterprise, and implement data pipelines that extract, transform, and load data into an information product that helps the organization reach strategic goals.
- Create physical and logical data models to store and share data that can be easily consumed for different BI needs.
- Develop Tableau dashboards and features.
- Create scalable, high-performance data load and management processes to make data available in near real time to support on-demand analytics and insights.
- Translate complex technical and functional requirements into detailed designs.
- Investigate and analyze alternative solutions for data storage, processing, etc., to ensure the most streamlined approaches are implemented.
- Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs.
- Design, develop, and maintain a data model implementing ETL processes.
- Manage and maintain the database, warehouse, and cluster with other dependent infrastructure.
- Work closely with data, product, and other teams to implement data analytics solutions.
- Support production applications and incident management.
- Help define data governance policies and support data versioning processes.
- Maintain security and data privacy by working closely with the Data Protection Officer internally.
- Analyze a vast number of data stores and uncover insights.

What We’re Looking For:
- Degree in computer science or engineering.
- Overall 3-5 years of experience in data & data warehousing, ETL, and data modeling.
- 2+ years of experience working with and managing large data stores, complex data pipelines, and BI solutions.
- Strong experience in SQL and writing complex queries.
- Hands-on experience with Tableau development.
- Hands-on working experience with Redshift, data modeling, data warehousing, ETL tools, Python, and shell scripting.
- Understanding of data warehousing and data modeling techniques.
- Strong data engineering skills on the AWS Cloud Platform are essential.
- Knowledge of Linux, SQL, and any scripting language.
- Good interpersonal skills and a positive attitude.
- Experience with travel data would be a plus.

Location
Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family. Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals. Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first. We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, and achievements, and drive company awareness and action. And much more!

All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement.

What if I don’t meet every requirement? If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about "checking every box"; please apply anyway. You may be exactly the person we’re looking for!
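
The role above centers on scalable, near-real-time data loads into Redshift. As a rough illustration of that general pattern (not Amex GBT's actual pipeline), the sketch below bulk-loads a partition of S3 data into a staging table with Redshift's COPY command; the cluster endpoint, bucket, table, and IAM role names are hypothetical placeholders.

```python
# Minimal sketch: bulk-load a partition of S3 data into Redshift with COPY.
# All connection details, bucket, table, and role names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)
conn.autocommit = True

copy_sql = """
    COPY bi.bookings_staging
    FROM 's3://example-bucket/bookings/dt=2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

with conn.cursor() as cur:
    cur.execute(copy_sql)                          # COPY parallelizes the load across slices
    cur.execute("ANALYZE bi.bookings_staging;")    # refresh planner statistics after the load
conn.close()
```

COPY is generally preferred over row-by-row INSERTs for Redshift because it loads files in parallel across the cluster's slices.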

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position: Data Architect
Skills: GCP, DA, Development, SQL, Python, BigQuery, Dataproc, Dataflow, Data Pipelines
Experience: 10+ years

Roles and Responsibilities
• 10+ years of relevant work experience, including previous experience leading data-related projects in the field of Reporting and Analytics.
• Design, build & maintain scalable data lakes and data warehouses in the cloud (GCP).
• Expertise in gathering business requirements, analysing business needs, and defining the BI/DW architecture to support and help deliver technical solutions to complex business and technical requirements.
• Create solution prototypes and participate in technology selection; perform POCs and technical presentations.
• Architect, develop, and test scalable data warehouse and data pipeline architectures in cloud technologies (GCP).
• Experience in SQL and NoSQL DBMS like MS SQL Server, MySQL, PostgreSQL, DynamoDB, Cassandra, and MongoDB.
• Design and develop scalable ETL processes, including error handling.
• Expert in query and programming languages: MS SQL Server, T-SQL, PostgreSQL, MySQL, Python, R.
• Prepare data structures for advanced analytics and self-service reporting using MS SQL, SSIS, SSRS.
• Write scripts for stored procedures, database snapshot backups, and data archiving.
• Experience with any of these cloud-based technologies:
  o Power BI/Tableau, Azure Data Factory, Azure Synapse, Azure Data Lake
  o AWS Redshift, Glue, Athena, AWS QuickSight
  o Google Cloud Platform

Good to have:
• Agile development environment pairing DevOps with CI/CD pipelines
• AI/ML background

Interested candidates, share your CV at dikshith.nalapatla@motivitylabs.com
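
Since this listing centers on building warehouses and pipelines on GCP with BigQuery, here is a minimal, hedged sketch of running an analytical query against BigQuery from Python; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: run an analytical query on BigQuery from Python.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT region, COUNT(*) AS orders
    FROM `example-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY orders DESC
"""

for row in client.query(sql).result():  # result() blocks until the query job finishes
    print(row["region"], row["orders"])
```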

Posted 1 week ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote


🔥 Sr Java Engineer - WFH/Remote

This is a fully remote working opportunity. If you are interested and fulfill the criteria below, send me the following details:
1. Email id
2. Years of relevant experience
3. CCTC, ECTC
4. Notice period

Must haves
- 5+ years of experience in web development in similar environments;
- Bachelor’s degree in Computer Science, Information Security, or a related technology field;
- Strong knowledge of Java 8 and 17, Spring, and Spring Boot;
- Experience with microservices and events;
- Great experience and passion for creating documentation for code and business processes;
- Expertise in architectural design and code review, with a strong grasp of SOLID principles;
- Skilled in gathering and analyzing complex requirements and business processes;
- Contribute to the development of our internal tools and reusable architecture;
- Experience creating optimized code and performance improvements for production systems and applications;
- Experience debugging, refactoring applications, and replicating scenarios to solve issues and understand the business;
- Familiarity with unit and system testing frameworks (e.g., JUnit, Mockito);
- Proficient in using Git;
- Dedicated: own the apps you and your team are developing and take quality very seriously;
- Problem solving: proactively solve problems before they can become real problems;
- Constantly upgrading your skill set and applying those practices;
- Upper-intermediate English level.

Nice to haves
- Experience with Test-Driven Development;
- Experience with logistics software (delivery, transportation, route planning), RSA domain;
- Experience with AWS, like ECS, SNS, SQS, and Redshift.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


About the Team
Data is at the foundation of DoorDash's success. The Data Engineering team builds database solutions for various use cases including reporting, product analytics, marketing optimization, and financial reporting. By implementing pipelines, data structures, and data warehouse architectures, this team serves as the foundation for decision-making at DoorDash.

About the Role
DoorDash is looking for a Senior Data Engineer to be a technical powerhouse who helps us scale our data infrastructure, automation, and tools to meet growing business needs.

You're excited about this opportunity because you will…
- Work with business partners and stakeholders to understand data requirements
- Work with engineering, product teams, and third parties to collect required data
- Design, develop, and implement large-scale, high-volume, high-performance data models and pipelines for the Data Lake and Data Warehouse
- Develop and implement data quality checks, conduct QA, and implement monitoring routines
- Improve the reliability and scalability of our ETL processes
- Manage a portfolio of data products that deliver high-quality, trustworthy data
- Help onboard and support other engineers as they join the team

We're excited about you because…
- 5+ years of professional experience
- 3+ years of experience working in data engineering, business intelligence, or a similar role
- Proficiency in programming languages such as Python/Java
- 3+ years of experience in ETL orchestration and workflow management tools like Airflow, Flink, Oozie, and Azkaban using AWS/GCP
- Expert in database fundamentals, SQL, and distributed computing
- 3+ years of experience with the distributed data ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka/Flink
- Experience working with Snowflake, Redshift, PostgreSQL, and/or other DBMS platforms
- Excellent communication skills and experience working with technical and non-technical teams
- Knowledge of reporting tools such as Tableau, Superset, and Looker
- Comfortable working in a fast-paced environment; a self-starter and self-organizer
- Ability to think strategically, and to analyze and interpret market and consumer information
- You must be located near one of our engineering hubs indicated above

Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only
We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey

About DoorDash
At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and iterate in order to make impactful decisions that display empathy for our range of users—from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks.

Our Commitment to Diversity and Inclusion
We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection.

We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144

To request a reasonable accommodation under applicable law or an alternate selection process, please inform your recruiting contact upon initial connection.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


This role is for one of Weekday's clients.
Min Experience: 4 years
Location: Ahmedabad
Job Type: Full-time

We are seeking a highly skilled Senior Database Administrator with 5-8 years of experience in data engineering and database management. The ideal candidate will have a strong foundation in data architecture, modeling, and pipeline orchestration. Hands-on experience with modern database technologies and exposure to generative AI tools in production environments will be a significant advantage. This role involves leading efforts to streamline data workflows, improve automation, and deliver high-impact insights across the organization.

Requirements

Key Responsibilities:
- Design, develop, and manage scalable and efficient data pipelines (ETL/ELT) across multiple database systems.
- Architect and maintain high-availability, secure, and scalable data storage solutions.
- Utilize generative AI tools to automate data workflows and enhance system capabilities.
- Collaborate with engineering, analytics, and data science teams to fulfill data requirements and optimize data delivery.
- Implement and monitor data quality standards, governance practices, and compliance protocols.
- Document data architectures, systems, and processes for transparency and maintainability.
- Apply data modeling best practices to support optimal storage and querying performance.
- Continuously research and integrate emerging technologies to advance the data infrastructure.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5-8 years of experience in database administration and data engineering for large-scale systems.
- Proven experience in designing and managing relational and non-relational databases.

Mandatory Skills:
- SQL: proficient in advanced queries, performance tuning, and database management.
- NoSQL: experience with at least one NoSQL database such as MongoDB, Cassandra, or CosmosDB.
- Hands-on experience with at least one of the following cloud data warehouses: Snowflake, Redshift, BigQuery, or Microsoft Fabric.
- Cloud expertise: strong experience with Azure and its data services.
- Working knowledge of Python for scripting and data processing (e.g., Pandas, PySpark).
- Experience with ETL tools such as Apache Airflow, Microsoft Fabric, Informatica, or Talend.
- Familiarity with generative AI tools and their integration into data pipelines.

Preferred Skills & Competencies:
- Deep understanding of database performance, tuning, backup, recovery, and security.
- Strong knowledge of data governance, data quality management, and metadata handling.
- Experience with Git or other version control systems.
- Familiarity with AI/ML-driven data solutions is a plus.
- Excellent problem-solving skills and the ability to resolve complex database issues.
- Strong communication skills to collaborate with cross-functional teams and stakeholders.
- Demonstrated ability to manage projects and mentor junior team members.
- Passion for staying updated with the latest trends and best practices in database and data engineering technologies.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote


Title – Product Manager
Experience: 4 to 8 years
Skills required: Business Analysis; managed products developed using AWS services like S3, API Gateway, Lambda, DynamoDB, and Redshift; Stakeholder Management; Backlog Prioritization

Job Description

The Area: Data Lake is a smart object store on AWS that allows storage and access. Files in the Data Lake are stored in raw or unstructured format, as compared to a structured DB, and are accessible for running a variety of analytics as needed on the available data. As a roadmap, the Data Lake would be used across Morningstar to store and access all structured and unstructured data across teams, making it a single source of information.

The Role: We are looking for an experienced, enthusiastic, results-driven individual to help advance our offerings to Morningstar's internal users. The ideal candidate will deeply understand the financial markets and financial data, and should have worked extensively on developing new products and digital propositions from concept through to launch. This business visionary will work with internal partners in Product, Research, and Investment Management to drive innovation in our product offerings. This position is based in our Mumbai office.

Responsibilities:
- Work within an Agile software development framework; develop business requirements and user stories refined and validated with customers and stakeholders; prioritize the backlog queue across multiple projects and workstreams; and ensure high-quality execution working with development and business analyst squad members.
- Work with external and internal project stakeholders to define and document project scope, plan product phases/versions, the Minimum Viable Product, and overall product deliveries.
- Work with other product and capability owners from across the organization to develop a product integration vision that supports and advances their business goals.
- Work with cross-functional leaders to determine the technology, design, and project management resources required to execute and deliver on commitments.
- Proactively communicate project delivery risks to key stakeholders to ensure timely deliverables.
- Own the tactical roadmap, requirements, and product development lifecycle for a squad to deliver high-performing Enterprise Components to our end clients.
- Understand business, operations, and technology requirements, serving as a conduit between stakeholders, operations, and technology teams.
- Define and track key performance indicators (KPIs) and measurements of product success.

Requirements:
- Candidates must have a minimum of a bachelor's degree with excellent academic credentials; an MBA is highly desired.
- At least five years of business experience in the financial services industry.
- Domain expertise, particularly in developing products using the AWS platform.
- Superior business judgment; analytical, planning, and decision-making skills; and exemplary communication and presentation abilities.
- An action-oriented individual possessing an entrepreneurial mindset.
- Demonstrated ability to lead and build the capabilities of a driven and diverse team.
- Able to thrive in a fast-paced work environment, exhibit a passion for innovation, and harbor a genuine belief in, and acceptance of, Morningstar's core values.
- Ability to develop strong internal and external partnerships, and work effectively across different business and functional areas.
- AWS Certification is a big plus.

Morningstar is an equal-opportunity employer.

Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

Legal Entity: Morningstar India Private Ltd. (Delhi)

Posted 1 week ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Roles and Responsibilities:
- Data Pipeline Development: Design, develop, and maintain scalable data pipelines to support ETL (Extract, Transform, Load) processes using tools like Apache Airflow, AWS Glue, or similar (a minimal Airflow sketch follows this listing).
- Database Management: Design, optimize, and manage relational and NoSQL databases (such as MySQL, PostgreSQL, MongoDB, or Cassandra) to ensure high performance and scalability.
- SQL Development: Write advanced SQL queries, stored procedures, and functions to extract, transform, and analyze large datasets efficiently.
- Cloud Integration: Implement and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud, utilizing services like Redshift, BigQuery, or Snowflake.
- Data Warehousing: Contribute to the design and maintenance of data warehouses and data lakes to support analytics and BI requirements.
- Programming and Automation: Develop scripts and applications in Python or other programming languages to automate data processing tasks.
- Data Governance: Implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security.
- Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions.
- Performance Optimization: Identify and resolve performance bottlenecks in data systems and optimize data storage and retrieval.
- Documentation: Maintain comprehensive documentation for data processes, pipelines, and infrastructure.
- Stay Current: Keep up to date with the latest trends and advancements in data engineering, big data technologies, and cloud services.

Required Skills and Qualifications:
- Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Technical Skills:
  - Proficiency in SQL and relational databases (PostgreSQL, MySQL, etc.).
  - Experience with NoSQL databases (MongoDB, Cassandra, etc.).
  - Strong programming skills in Python; familiarity with Java or Scala is a plus.
  - Experience with data pipeline tools (Apache Airflow, Luigi, or similar).
  - Expertise in cloud platforms (AWS, Azure, or Google Cloud) and data services (Redshift, BigQuery, Snowflake).
  - Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus.
- Data Modeling: Experience in designing and maintaining data models for relational and non-relational databases.
- Analytical Skills: Strong analytical and problem-solving abilities with a focus on performance optimization and scalability.
- Soft Skills: Excellent verbal and written communication skills to convey technical concepts to non-technical stakeholders; ability to work collaboratively in cross-functional teams.
- Certifications (Preferred): AWS Certified Data Analytics, Google Professional Data Engineer, or similar.
- Mindset: Eagerness to learn new technologies and adapt quickly in a fast-paced environment.
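
As referenced in the first responsibility above, here is a minimal, hedged sketch of an Apache Airflow DAG with the extract-transform-load shape this listing describes; the DAG id, task bodies, and schedule are hypothetical placeholders.

```python
# Minimal sketch of an ETL DAG in Apache Airflow (2.4+; older versions use
# schedule_interval instead of schedule). Task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull raw records from a source system

def transform():
    ...  # clean and reshape the extracted data

def load():
    ...  # write results to the warehouse (e.g., Redshift or BigQuery)

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t  # enforce run order: E -> T -> L
```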

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurgaon, Haryana, India

Remote


Amex GBT is a place where colleagues find inspiration in travel as a force for good and, through their work, can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued. Ready to explore a career path? Start your journey.

What You’ll Do on a Typical Day
- Manage a team of developers
- Ensure every individual contributes, learns, and grows in their role, knowledge, and skills
- Ensure the team works efficiently in an enjoyable environment
- Evaluate and optimize the time and resources needed to carry out the various stages of projects in order to establish an overall development plan
- Negotiate resources (human, technical, financial, deadlines) according to the progress of projects and adjust resources if necessary
- Measure risks that may arise during the implementation and development phases
- Set up the necessary interfaces between the departments concerned
- Follow up on progress and deadlines for the various tasks with project collaborators
- Guarantee the quality of the product / code base

What We’re Looking For
- 10+ years of proficient experience in C# or Python development, as well as in Docker
- Experience with PostgreSQL or Oracle
- Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
- At least 3+ years’ experience in a similar R&D Manager position
- Engineering school or master's degree in computer science
- First experience in team management (including remote or offshore teams)
- Passion for excellence in programming
- Software craftsmanship / clean-code centered
- Fluency in English (multicultural and international team)

What Technical Skills You’ll Develop
- C# .NET and/or Python
- Oracle, PostgreSQL
- AWS
- ELK (Elasticsearch, Logstash, Kibana)
- GIT, GitHub, TeamCity, Docker, Ansible

Location
Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and more. Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals. Develop the skills you want when the time is right for you, with global tuition assistance, access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first. We strive to champion Diversity, Equity, and Inclusion in every aspect of our business at GBT. You can connect with colleagues through our global Inclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, and achievements, and drive company awareness and action. Wellbeing resources to support mental and emotional health for you and your immediate family. And much more!

All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the GBT Recruitment Privacy Statement.

What if I don’t meet every requirement? If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about "checking every box"; please apply anyway. You may be exactly the person we’re looking for!

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Role: Data Analyst
Open Positions: 1
Mandatory Skillset: SQL, Power BI, Python, Amazon Athena
Experience: 5+ years
Work Location: TVM/Kochi/Remote
Notice Period: Immediate only
Budget: Max 19 LPA

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
- Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
- Analyze large datasets to uncover trends, patterns, and actionable insights.
- Design and build dashboards and reports using Power BI.
- Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
- Ensure data accuracy, consistency, and integrity through data validation and quality checks.
- Build and maintain SQL queries, views, and data models for reporting purposes.
- Communicate findings clearly through presentations, visualizations, and written summaries.
- Partner with data engineers and architects to improve data pipelines and architecture.
- Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
- Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
- 5+ years of experience in a data analyst or business intelligence role.
- Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
- Hands-on experience in Power BI.
- Proficiency in Python, Excel, and data storytelling.
- Understanding of data modelling, ETL concepts, and basic data architecture.
- Strong analytical thinking and problem-solving skills.
- Excellent communication and stakeholder management skills.
- Adherence to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage & nurture a team of data engineers

Skills: data storytelling, SQL, Athena, Power BI, Excel, Python, AWS, Amazon Athena
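
This listing pairs SQL with Amazon Athena and Python. As a hedged illustration of that combination, the sketch below submits an Athena query via boto3 and polls until it settles; the database, table, and results bucket are hypothetical placeholders.

```python
# Minimal sketch: run an Amazon Athena query from Python with boto3 and poll
# for completion. Database, table, and bucket names are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT channel, COUNT(*) AS sessions FROM web_events GROUP BY channel",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)  # Athena is asynchronous; poll until the query settles

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[:5])  # note: the first row returned is the column header
```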

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Amex GBT is a place where colleagues find inspiration in travel as a force for good and, through their work, can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued. Create your journey at Amex GBT!

As a Neo Software Engineer, you’ll join our highly skilled R&D Neo team! Neo is an online booking tool (OBT) as well as an expense tool for business travel through a SaaS deployment. It allows users to search and book a business trip in a user-friendly, high-performance web application, giving them access to different transport modes and travel suppliers within the travel policy set by their company. We’re excited for you to experience our values (People, Passion, and Progress) in action, and look forward to your application.

What You’ll Do on a Typical Day
- Work in a Scrum team
- Design, develop, and test new applications and features
- Participate in the evolution and maintenance of existing systems
- Contribute to the deployment of features
- Monitor the platform
- Propose new ideas to enhance the product, either functionally or technically

What We’re Looking For
- Operational knowledge of C# or Python development, as well as Docker
- Experience with PostgreSQL or Oracle
- Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
- Real desire to master new technologies
- Unit testing & TDD methodology are assets
- Team spirit, analytical and synthesis skills
- Passion, software craftsmanship, a culture of excellence, clean code
- Fluency in English (multicultural and international team)

What Technical Skills You’ll Develop
- C# .NET and/or Python
- Oracle, PostgreSQL
- AWS
- ELK (Elasticsearch, Logstash, Kibana)
- GIT, GitHub, TeamCity, Docker, Ansible

#GBTJobs

Location
Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family. Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals. Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first. We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, and achievements, and drive company awareness and action. And much more!

All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement.

What if I don’t meet every requirement? If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about "checking every box"; please apply anyway. You may be exactly the person we’re looking for!

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Role: Data Analyst
Open Positions: 1
Mandatory Skillset: SQL, Power BI, Python, Amazon Athena
Experience: 5+ years
Work Location: TVM/Kochi/Remote
Notice Period: Immediate only
Budget: Max 19 LPA

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
- Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
- Analyze large datasets to uncover trends, patterns, and actionable insights.
- Design and build dashboards and reports using Power BI.
- Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
- Ensure data accuracy, consistency, and integrity through data validation and quality checks.
- Build and maintain SQL queries, views, and data models for reporting purposes.
- Communicate findings clearly through presentations, visualizations, and written summaries.
- Partner with data engineers and architects to improve data pipelines and architecture.
- Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
- Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
- 5+ years of experience in a data analyst or business intelligence role.
- Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
- Hands-on experience in Power BI.
- Proficiency in Python, Excel, and data storytelling.
- Understanding of data modelling, ETL concepts, and basic data architecture.
- Strong analytical thinking and problem-solving skills.
- Excellent communication and stakeholder management skills.
- Adherence to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage & nurture a team of data engineers

Skills: data storytelling, SQL, Athena, Power BI, Excel, Python, AWS, Amazon Athena

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


As a Software Developer, you will work in a constantly evolving environment, shaped by technological advances and the strategic direction of the organization you work for. You will create, maintain, audit, and improve systems to meet particular needs, often as advised by a systems analyst or architect, testing both hardware and software systems to diagnose and resolve system faults. The role also covers writing diagnostic programs and designing and writing code for operating systems and software to ensure efficiency. When required, you will make recommendations for future developments.

Benefits of Joining Us
- Challenging Projects: Work on cutting-edge projects and solve complex technical problems.
- Career Growth: Advance your career quickly and take on leadership roles.
- Mentorship: Learn from experienced mentors and industry experts.
- Global Opportunities: Work with clients from around the world and gain international experience.
- Competitive Compensation: Receive attractive compensation packages and benefits.

If you're passionate about technology and want to work on challenging projects with a talented team, becoming an Infosys Power Programmer could be a great career choice.

Mandatory Skills
- AWS Glue, AWS Redshift/Spectrum, S3, API Gateway, Athena, Step Functions, and Lambda
- Experience in the Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) data integration patterns
- Experience in designing and building data pipelines
- Development experience in one or more object-oriented programming languages, preferably Python

Job Specs
- 5+ years of in-depth, hands-on experience developing, testing, deploying, and debugging Spark jobs using Scala on a Hadoop platform
- In-depth knowledge of Spark Core, working with RDDs, and Spark SQL
- In-depth knowledge of Spark optimization techniques and best practices (a brief illustration follows this listing)
- Good knowledge of Scala functional programming: Try, Option, Future, Collections
- Good knowledge of Scala OOP: classes, traits, and objects (singleton and companion), case classes
- Good understanding of Scala language features: type system, implicits/givens
- Hands-on experience working in a Hadoop environment (HDFS/Hive), AWS S3, EMR
- Python programming skills
- Working experience with workflow orchestration tools like Airflow and Oozie
- Working with API calls in Scala
- Understanding of and exposure to file formats such as Apache Avro, Parquet, and JSON
- Good to have: knowledge of Protocol Buffers and geospatial data analytics
- Writing test cases using frameworks such as ScalaTest
- Good knowledge of build tools such as Gradle and SBT
- Experience using Git, resolving conflicts, and working with branches
- Good to have: experience with workflow systems such as Airflow
- Strong programming skills using data structures and algorithms
- Excellent analytical skills
- Good communication skills

Qualification
- 7-10 years in the industry
- BE/B.Tech in CS or equivalent
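
As referenced in the Job Specs above, the role stresses Spark optimization techniques (there in Scala). As a hedged illustration in Python/PySpark of one such technique, the sketch below broadcasts a small dimension table so the large fact table is joined without a shuffle; all paths and column names are hypothetical.

```python
# Minimal PySpark sketch of one common Spark optimization: broadcasting a
# small dimension table so the large fact table avoids a shuffle join.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join-example").getOrCreate()

facts = spark.read.parquet("s3a://example-bucket/events/")     # large fact table
dims = spark.read.parquet("s3a://example-bucket/dim_users/")   # small dimension table

# broadcast() ships the small table to every executor, so the join runs
# map-side instead of shuffling the large table across the cluster.
joined = facts.join(broadcast(dims), on="user_id", how="left")
joined.write.mode("overwrite").parquet("s3a://example-bucket/enriched_events/")
```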

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site


Company Description
ThreatXIntel is a startup cyber security company dedicated to providing customized, affordable solutions to protect businesses and organizations from cyber threats. Our services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We take a proactive approach to security, continuously monitoring and testing our clients' digital environments to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a freelance Data Engineer with strong experience in PySpark and AWS data services, particularly S3 and Redshift. The ideal candidate will also have some familiarity with integrating or handling data from Salesforce. This role focuses on building scalable data pipelines, transforming large datasets, and enabling efficient data analytics and reporting.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark for large-scale data processing.
- Manage data ingestion, storage, and transformation across AWS S3 and Redshift.
- Design data flows and schemas to support reporting, analytics, and business intelligence needs.
- Perform incremental loads, partitioning, and performance tuning in distributed environments.
- Extract and integrate relevant datasets from Salesforce for downstream processing.
- Ensure data quality, consistency, and availability for analytics teams.
- Collaborate with data analysts, platform engineers, and business stakeholders.

Required Skills:
- Strong hands-on experience with PySpark for large-scale distributed data processing.
- Proven track record working with AWS S3 (data lake) and Amazon Redshift (data warehouse).
- Ability to write complex SQL queries for transformation and reporting.
- Basic understanding of, or experience integrating, data from Salesforce (APIs or exports).
- Experience with performance optimization, partitioning strategies, and efficient schema design.
- Knowledge of version control and collaborative development tools (e.g., Git).

Nice to Have:
- Experience with AWS Glue or Lambda for orchestration.
- Familiarity with Salesforce objects, SOQL, or ETL tools like Talend, Informatica, or Airflow.
- Understanding of data governance and security best practices in cloud environments.
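
The core of this role is the S3 → PySpark → Redshift flow. As a hedged sketch of that pattern, the snippet below cleans a raw S3 dataset and appends it to a Redshift staging table over plain JDBC; endpoints, credentials, and table names are hypothetical, and production jobs often use the dedicated Spark-Redshift connector instead.

```python
# Minimal sketch of an S3 -> transform -> Redshift load in PySpark.
# Endpoints, credentials, and table names are hypothetical; the Redshift
# JDBC driver jar must be on the Spark classpath for the write to work.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-to-redshift").getOrCreate()

raw = spark.read.json("s3a://example-lake/raw/orders/dt=2024-01-01/")

clean = (
    raw.dropDuplicates(["order_id"])                     # dedupe on business key
       .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize timestamps
       .filter(F.col("amount") > 0)                      # drop invalid rows
)

(clean.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/analytics")
      .option("dbtable", "staging.orders")
      .option("user", "etl_user")
      .option("password", "...")
      .option("driver", "com.amazon.redshift.jdbc42.Driver")
      .mode("append")
      .save())
```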

Posted 1 week ago

Apply

1.0 years

0 Lacs

India

Remote


Job Title: Data Engineer – AWS Full Stack
Location: India (Remote or Hybrid)
Contract Type: Full-time, 1-Year Contract
Experience Required: Minimum 5 years
Start Date: Immediate
Compensation: Competitive (based on experience)

About the Role
We are seeking a highly skilled Data Engineer with deep expertise in the AWS ecosystem and full-stack data engineering. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and analytics platforms that support critical business insights and decision-making. This is a 1-year contract role ideal for professionals who have experience across data ingestion, transformation, cloud infrastructure, and data operations.

Key Responsibilities
- Design and build end-to-end data pipelines using AWS services (Glue, Lambda, S3, Athena, Redshift, EMR, etc.); see the sketch after this list.
- Develop and manage ETL/ELT processes, ensuring data quality, scalability, and maintainability.
- Collaborate with product, analytics, and engineering teams to deliver data models, APIs, and real-time data solutions.
- Implement best practices for data governance, lineage, monitoring, and access control.
- Automate data workflows using tools like Airflow, Step Functions, or custom scripts.
- Create and maintain infrastructure as code (IaC) using CloudFormation or Terraform for AWS data components.
- Optimize data warehouse and lakehouse architectures for performance and cost.

Required Skills & Qualifications
- 5+ years of experience in data engineering, including cloud-native data development.
- Strong expertise in AWS data services: Glue, S3, Lambda, Redshift, Athena, Kinesis, EMR, etc.
- Proficiency in SQL, Python, and Spark for data manipulation and transformation.
- Experience with DevOps tools (CI/CD, Git, Docker) and infrastructure automation.
- Knowledge of data modeling, schema design, and performance tuning for large-scale datasets.
- Ability to work independently in a contract environment, managing priorities and deadlines.

Preferred Qualifications
- Familiarity with streaming data architectures using Kafka/Kinesis.
- Experience working in regulated or large-scale enterprise environments.
- Exposure to BI tools (e.g., QuickSight, Tableau, Power BI) and API integration for downstream consumption.
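
As referenced under Key Responsibilities, one common way to wire Glue, Lambda, and S3 into a pipeline is an S3-triggered Lambda that starts a Glue job for each new object. This is a hedged sketch, not a prescribed architecture; the Glue job name and argument keys are hypothetical.

```python
# Minimal sketch: an S3-triggered AWS Lambda handler that starts a Glue job
# for each newly landed object. The Glue job name is a hypothetical placeholder.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Each record in an S3 event describes one object that just landed.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="example-etl-job",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
    return {"status": "started"}
```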

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote


Role: Data Analyst
Open Positions: 1
Mandatory Skillset: SQL, Power BI, Python, Amazon Athena
Experience: 5+ years
Work Location: TVM/Kochi/Remote
Notice Period: Immediate only
Budget: Max 19 LPA

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
- Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
- Analyze large datasets to uncover trends, patterns, and actionable insights.
- Design and build dashboards and reports using Power BI.
- Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
- Ensure data accuracy, consistency, and integrity through data validation and quality checks.
- Build and maintain SQL queries, views, and data models for reporting purposes.
- Communicate findings clearly through presentations, visualizations, and written summaries.
- Partner with data engineers and architects to improve data pipelines and architecture.
- Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
- Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
- 5+ years of experience in a data analyst or business intelligence role.
- Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
- Hands-on experience in Power BI.
- Proficiency in Python, Excel, and data storytelling.
- Understanding of data modelling, ETL concepts, and basic data architecture.
- Strong analytical thinking and problem-solving skills.
- Excellent communication and stakeholder management skills.
- Adherence to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage & nurture a team of data engineers

Skills: data storytelling, SQL, Athena, Power BI, Excel, Python, AWS, Amazon Athena

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kerala, India

Remote


Hiring: Senior Data Analyst (5+ Years) – Remote/Kochi/Trivandrum/Bangalore/Chennai
📍 Location: Remote / Kochi / Trivandrum
💰 Budget: Up to 19 LPA
📆 Immediate Joiners Preferred

🚀 About the Role:
We’re looking for a Senior Data Analyst to join our Data & Analytics team! You’ll transform complex data into actionable insights, drive strategic decisions, and empower stakeholders with intuitive dashboards and reports. If you love digging into data, solving business problems, and communicating insights effectively, this role is for you!

🔧 Mandatory Key Skills (5 years mandatory):
✔ SQL (Advanced)
✔ Power BI (Dashboarding & Visualization)
✔ Python (Data Analysis)
✔ Amazon Athena (or similar cloud data tools)
✔ 5+ years in Data Analysis/Business Intelligence

Job Description / Duties & Responsibilities
• Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
• Analyze large datasets to uncover trends, patterns, and actionable insights.
• Design and build dashboards and reports using Power BI.
• Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
• Ensure data accuracy, consistency, and integrity through data validation and quality checks.
• Build and maintain SQL queries, views, and data models for reporting purposes.
• Communicate findings clearly through presentations, visualizations, and written summaries.
• Partner with data engineers and architects to improve data pipelines and architecture.
• Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
• Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
• 5+ years of experience in a data analyst or business intelligence role.
• Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
• Hands-on experience in Power BI.
• Proficiency in Python, Excel, and data storytelling.
• Understanding of data modelling, ETL concepts, and basic data architecture.
• Strong analytical thinking and problem-solving skills.
• Excellent communication and stakeholder management skills.
• Adherence to the Information Security Management policies and procedures.

Soft Skills Required
• Must be a good team player with good communication skills
• Must have good presentation skills
• Must be a proactive problem solver and a self-driven leader
• Manage & nurture a team of data engineers

Posted 1 week ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

Remote


Role: Data Analyst
Open Positions: 1
Mandatory Skillset: SQL, Power BI, Python, Amazon Athena
Experience: 5+ years
Work Location: TVM/Kochi/Remote
Notice Period: Immediate only
Budget: Max 19 LPA

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
- Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
- Analyze large datasets to uncover trends, patterns, and actionable insights.
- Design and build dashboards and reports using Power BI.
- Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
- Ensure data accuracy, consistency, and integrity through data validation and quality checks.
- Build and maintain SQL queries, views, and data models for reporting purposes.
- Communicate findings clearly through presentations, visualizations, and written summaries.
- Partner with data engineers and architects to improve data pipelines and architecture.
- Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
- Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
- 5+ years of experience in a data analyst or business intelligence role.
- Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
- Hands-on experience in Power BI.
- Proficiency in Python, Excel, and data storytelling.
- Understanding of data modelling, ETL concepts, and basic data architecture.
- Strong analytical thinking and problem-solving skills.
- Excellent communication and stakeholder management skills.
- Adherence to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage & nurture a team of data engineers

Skills: data storytelling, SQL, Athena, Power BI, Excel, Python, AWS, Amazon Athena

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role Summary
Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world.

Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
• Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes; provide guidance and lead or co-lead moderately complex projects.
• Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
• Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
• Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
• Collaborate effectively with contractors to deliver technical enhancements.
• Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
• Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
• Conduct root cause analysis and address production data issues.
• Lead the design, development, and implementation of AI models and algorithms for sophisticated data analytics and supply chain initiatives.
• Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer’s projects.
• Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
• Document and present findings, methodologies, and project outcomes to various stakeholders.
• Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
• Work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
• A bachelor’s or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
• Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations.
• Over 2 years of experience in the development and deployment of AI, machine learning, and large language models (LLMs).
• A proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
• Strong understanding of data structures, algorithms, and software design principles.
• Programming languages: proficiency in Python and SQL, with familiarity with Java or Scala.
• AI and automation: knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
• Ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
• Data warehousing: experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
• ETL tools: knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
• Big data technologies: familiarity with Hadoop, Spark, and Kafka for big data processing.
• Cloud platforms: hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
• Containerization: understanding of Docker and Kubernetes for containerization and orchestration.
• Data integration: skills in integrating data from various sources, including APIs, databases, and external files.
• Data modeling: understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
• Structured data: proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
• Unstructured data: experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
• Data excellence: familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel or Environment Requirements
Occasional travel required.

Work Location Assignment: Hybrid

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary, and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.

EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech
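
The qualifications above name Apache Airflow and Prefect as pipeline-automation tools. As a rough, generic illustration (not Pfizer's actual stack), a minimal Airflow DAG chaining an extract step into a load step might look like the sketch below; the DAG id and task functions are invented for the example, and the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull source records (e.g., from an API or an S3 drop zone).
    print("extracting...")


def load():
    # Placeholder: write transformed records to the warehouse.
    print("loading...")


with DAG(
    dag_id="example_etl",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```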

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Summary:
We are seeking a highly skilled and innovative Data Scientist to join our team and drive data-centric initiatives by leveraging AI/ML models, Big Data technologies, and cloud platforms like AWS. The ideal candidate will be proficient in Python, experienced in designing end-to-end machine learning pipelines, and comfortable working with large-scale data systems.

Key Responsibilities:
• Design, develop, and deploy machine learning models and AI-based solutions for business problems.
• Build robust ETL pipelines to process structured and unstructured data using tools like PySpark, Airflow, or Glue.
• Work with AWS cloud services (e.g., S3, Lambda, SageMaker, Redshift, EMR) to build scalable data science solutions.
• Perform exploratory data analysis (EDA) and statistical modeling to uncover actionable insights.
• Collaborate with data engineers, product managers, and stakeholders to identify use cases and deliver impactful data-driven solutions.
• Optimize model performance and ensure model explainability, fairness, and reproducibility.
• Maintain and improve existing data science solutions through MLOps practices (e.g., model monitoring, retraining, CI/CD for ML).

Required Skills and Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Statistics, Data Science, or a related field.
• 3+ years of experience in data science or machine learning roles.
• Strong programming skills in Python and experience with libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch.
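
As a generic illustration of the end-to-end pipeline expectation (not this employer's codebase), a minimal scikit-learn pipeline that chains preprocessing and a model, then evaluates on a holdout set, might look like this; the public demo dataset stands in for real business data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Public demo dataset stands in for real business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Chain preprocessing and the model so the same steps apply at inference time.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")
```

Wrapping the scaler and the model in one Pipeline ensures identical preprocessing at training and inference time, which also speaks to the reproducibility requirement mentioned above.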

Posted 1 week ago

Apply

4.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Key Responsibilities:
• ETL Pipeline Development: Design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
• Data Pipeline Optimization: Monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
• Linux Server Management: Work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
• File Handling & Storage Management: Efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
• Disk Space & Warehouse Balancing: Proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
• Error Handling & Logging: Implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
• Automation & Scheduling: Automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools.
• Data Quality & Validation: Ensure data integrity and consistency by implementing validation checks and reconciliation processes.
• Security & Compliance: Follow best practices in data security, access control, and compliance while handling sensitive data.
• Collaboration with Teams: Work closely with data engineers, analysts, and product teams to align data processing with business needs.

Skills Required:
• Proficiency in Python: Strong hands-on experience in writing Python scripts for ETL processes.
• Linux Expertise: Experience working with Linux servers, command-line operations, and system performance tuning.
• Cloud Storage Management: Hands-on experience with Amazon S3, including handling file storage, retrieval, and lifecycle policies.
• Data Pipeline Management: Experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
• SQL & Database Handling: Strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
• Disk Space & Storage Optimization: Ability to manage disk space efficiently, balancing usage across different systems.
• Error Handling & Debugging: Strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.

Nice to Have:
• Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
• Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
• Familiarity with containerization tools (Docker, Kubernetes) for deployment.
• Exposure to infrastructure automation tools (Terraform, Ansible).

Qualifications:
• Bachelor’s degree in Computer Science, Data Engineering, or a related field.
• 4+ years of experience in ETL development, data pipeline management, or backend data engineering.
• Strong analytical mindset and ability to handle large-scale data processing efficiently.
• Ability to work independently in a fast-paced, product-driven environment.
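
To make the file handling and storage management duties concrete, here is a hedged sketch of archiving processed files within S3 using boto3; the bucket name and prefixes are placeholders, and a real pipeline would add pagination and error handling.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-bucket"   # hypothetical bucket
SRC_PREFIX = "incoming/"
DST_PREFIX = "archive/"

# List processed files, copy each to the archive prefix, then delete the original.
# Note: list_objects_v2 returns at most 1000 keys per call; a production job
# would use a paginator.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=SRC_PREFIX)
for obj in resp.get("Contents", []):
    key = obj["Key"]
    if key.endswith("/"):   # skip folder placeholder keys
        continue
    dst_key = DST_PREFIX + key[len(SRC_PREFIX):]
    s3.copy_object(
        Bucket=BUCKET,
        CopySource={"Bucket": BUCKET, "Key": key},
        Key=dst_key,
    )
    s3.delete_object(Bucket=BUCKET, Key=key)
    print(f"archived {key} -> {dst_key}")
```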

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
• Build data pipelines to ingest, process, and transform data from files, streams, and databases.
• Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
• Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies.
• Develop streaming pipelines.
• Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions that meet ever-increasing data volumes.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
• Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
• Minimum 3 years of experience on cloud data platforms on AWS.
• Experience with AWS EMR, AWS Glue, Databricks, Amazon Redshift, and DynamoDB.
• Good to excellent SQL skills.
• Exposure to streaming solutions and message brokers such as Kafka.

Preferred Technical And Professional Experience
• Certification in AWS and Databricks, or Cloudera Spark certified developers.
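
For the Spark-on-AWS stack this listing describes, a minimal PySpark batch job (read, filter, aggregate, write) could look like the sketch below; the S3 paths and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Hypothetical input location on S3.
events = spark.read.json("s3://my-raw-zone/events/")

# Keep completed events and aggregate per user per day.
daily = (
    events
    .filter(F.col("status") == "completed")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("events"))
)

# Write partitioned Parquet back to a hypothetical curated zone.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://my-curated-zone/daily_events/"
)
```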

Posted 1 week ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Software Engineer (Backend) (SDE-1)

DViO is one of the largest independent, highly awarded, digital-first marketing companies with a team of 175+ people operating across India, the Middle East, and South East Asia. We are a full-service digital marketing agency with a focus on ROI-driven marketing.

We are looking for a Software Engineer (Backend) to join our team. The ideal candidate will have a strong background in software development and experience with backend technologies. We are looking for someone who is passionate about backend system design and is looking to grow in this field.

Responsibilities
You will be working with a team responsible for developing services for various applications, like marketing automation, campaign optimization, recommendation, and analytical systems. The candidate will work on developing backend services, including REST APIs, data processing pipelines, and database management.
• Develop backend services for various business use cases
• Write clean, maintainable code
• Collaborate with other team members
• Iterate on code based on feedback
• Work on bug fixes, refactoring, and performance improvements
• Track technology changes and keep our applications up to date

Requirements
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field
• 0-1 year of experience in software development

Must-have skills:
• Proficient in either PHP, Python, or Node.js
• Experience with any backend MVC framework like Laravel, Rails, Express, Django, etc.
• Experience with any database like MySQL, PostgreSQL, MongoDB, etc.
• Experience with REST APIs, Docker, Bash, and Git

Good-to-have skills:
• Experience with WebSockets, Socket.io, etc.
• Experience with search technologies like Meilisearch, Typesense, Elasticsearch, etc.
• Experience with caching technologies like Redis, Memcached, etc.
• Experience with cloud platforms like AWS, GCP, Azure, etc.
• Experience with monolithic architecture
• Experience with data warehouses or data lakes like Snowflake, Amazon Redshift, Google BigQuery, Databricks, etc.

Benefits
DViO offers an innovative and challenging work environment with the opportunity to work on cutting-edge technologies. Join us and be a part of a dynamic team that is passionate about software development and builds applications that will shape the future of digital marketing.
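
As a small taste of the backend work described (the posting equally allows PHP or Node.js, so this is only one possible stack), a minimal REST API in FastAPI might look like the following; the route and data model are invented.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory store stands in for a real database.
CAMPAIGNS = {1: {"id": 1, "name": "Diwali Launch", "budget": 50000}}


class Campaign(BaseModel):
    name: str
    budget: int


@app.get("/campaigns/{campaign_id}")
def get_campaign(campaign_id: int):
    campaign = CAMPAIGNS.get(campaign_id)
    if campaign is None:
        raise HTTPException(status_code=404, detail="campaign not found")
    return campaign


@app.post("/campaigns", status_code=201)
def create_campaign(payload: Campaign):
    # model_dump() assumes Pydantic v2; v1 uses .dict() instead.
    new_id = max(CAMPAIGNS) + 1
    CAMPAIGNS[new_id] = {"id": new_id, **payload.model_dump()}
    return CAMPAIGNS[new_id]
```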

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Role: Database Engineer
Location: Remote

Skills and Experience
● Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
● Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals.
● Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
● Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
● Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
● Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
● Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
● Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
● Knowledge of SQL and understanding of database design principles, normalization, and indexing.
● Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
● Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
● Eagerness to develop import workflows and scripts to automate data import processes.
● Knowledge of data security best practices, including access controls, encryption, and compliance standards.
● Strong problem-solving and analytical skills with attention to detail.
● Creative and critical thinking.
● Strong willingness to learn and expand knowledge in data engineering.
● Familiarity with Agile development methodologies is a plus.
● Experience with version control systems, such as Git, for collaborative development.
● Ability to thrive in a fast-paced environment with rapidly changing priorities.
● Ability to work collaboratively in a team environment.
● Good and effective communication skills.
● Comfortable with autonomy and ability to work independently.
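
Since the listing highlights Python with Pandas and SQLAlchemy for data workflows, here is a minimal extract-transform-load sketch under those assumptions; the file path, connection string, table name, and the `country` column are placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source file and target database.
SOURCE_CSV = "exports/customers.csv"
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")

# Extract: read the raw export.
df = pd.read_csv(SOURCE_CSV)

# Transform: normalize column names, drop exact duplicates, fill a missing field.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()
df["country"] = df["country"].fillna("unknown")  # assumes a 'country' column exists

# Load: replace the reporting table in one call.
df.to_sql("customers_clean", engine, if_exists="replace", index=False)
print(f"loaded {len(df)} rows into customers_clean")
```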

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


About Qualitrol
Qualitrol is a leader in providing condition monitoring solutions for the electricity industry, ensuring reliability and efficiency in high-voltage electrical assets. We leverage cutting-edge technology, data analytics, and AI to transform how utilities manage their assets and make data-driven decisions.

Role Summary
We are looking for a highly skilled Senior Data Engineer to join our team and drive the development of our data engineering capabilities. This role involves designing, developing, and maintaining scalable data pipelines, optimizing data infrastructure, and ensuring high-quality data for analytics and AI-driven solutions. The ideal candidate will have deep expertise in data modeling, cloud-based data platforms, and best practices in data engineering.

Key Responsibilities
• Design, develop, and optimize scalable ETL/ELT pipelines for large-scale industrial data.
• Architect and maintain data warehouses, lakes, and streaming solutions to support analytics and AI-driven insights.
• Implement data governance, security, and quality best practices to ensure data integrity and compliance.
• Work closely with Data Scientists, AI Engineers, and Software Developers to build robust data solutions.
• Optimize data infrastructure performance for real-time and batch processing.
• Leverage cloud-based technologies (AWS, Azure, GCP) to develop and deploy scalable data solutions.
• Develop and maintain APIs and data access layers for seamless integration across platforms.
• Collaborate with cross-functional teams to define and implement data strategy and architecture.
• Stay up to date with emerging data engineering technologies and best practices.

Required Qualifications & Experience
• 5+ years of experience in data engineering, software development, or related fields.
• Proficiency in programming languages such as Python, Scala, or Java.
• Expertise in SQL and database technologies (PostgreSQL, MySQL, NoSQL, etc.).
• Hands-on experience with big data technologies (e.g., Spark, Kafka, Hadoop).
• Strong understanding of data warehousing (e.g., Snowflake, Redshift, BigQuery) and data lake architectures.
• Experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data solutions.
• Knowledge of CI/CD pipelines, DevOps, and infrastructure as code (Terraform, Kubernetes, Docker).
• Familiarity with MLOps and AI-driven data workflows is a plus.
• Strong problem-solving skills, ability to work independently, and excellent communication skills.

Preferred Qualifications
• Experience in the electricity, utilities, or industrial sectors.
• Knowledge of IoT data ingestion and edge computing.
• Familiarity with GraphQL and RESTful API development.
• Experience with data visualization and business intelligence tools (Power BI, Tableau, etc.).
• Contributions to open-source data engineering projects.

What We Offer
• Competitive salary and performance-based incentives.
• Comprehensive benefits package, including health, dental, and retirement plans.
• Opportunities for career growth and professional development.
• A dynamic work environment focused on innovation and cutting-edge technology.
• Hybrid/remote work flexibility (depending on location and project needs).

How To Apply
Interested candidates should submit their resume and a cover letter detailing their experience and qualifications.

Fortive Corporation Overview
Fortive’s essential technology makes the world stronger, safer, and smarter. We accelerate transformation across a broad range of applications including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We’re a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact. At Fortive, we believe in you. We believe in your potential—your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We’re honest about what’s working and what isn’t, and we never stop improving and innovating. Fortive: For you, for us, for growth.

About QUALITROL
QUALITROL manufactures monitoring and protection devices for high-value electrical assets and OEM manufacturing companies. Established in 1945, QUALITROL produces thousands of different types of products on demand, customized to meet our individual customers’ needs. We are the largest and most trusted global leader for partial discharge monitoring, asset protection equipment, and information products across power generation, transmission, and distribution. At Qualitrol, we are redefining condition-based monitoring.

We Are an Equal Opportunity Employer
Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process, please contact us at applyassistance@fortive.com.

Bonus or Equity
This position is also eligible for a bonus as part of the total compensation package.
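
For the streaming side of the role (Kafka appears among the required big data technologies), a hedged consumer sketch using the kafka-python package is shown below; the topic, broker address, and telemetry schema are invented for illustration.

```python
import json

from kafka import KafkaConsumer

# Hypothetical topic and broker; deserialize JSON-encoded sensor readings.
consumer = KafkaConsumer(
    "asset-telemetry",
    bootstrap_servers=["localhost:9092"],
    group_id="condition-monitoring",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    reading = message.value
    # Flag readings above a made-up temperature threshold.
    if reading.get("temperature_c", 0) > 90:
        print(f"ALERT asset={reading.get('asset_id')} temp={reading['temperature_c']}")
```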

Posted 1 week ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Amazon Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift (see the sketch after this list). (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
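
Several of these questions (SORTKEY vs. DISTKEY, the COPY command) are easier to answer with a concrete snippet in hand. The sketch below uses psycopg2, one common way to talk to Redshift from Python; the cluster endpoint, credentials, table, and IAM role ARN are placeholders.

```python
import psycopg2

# Hypothetical cluster connection details.
conn = psycopg2.connect(
    host="my-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="...",
)
cur = conn.cursor()

# DISTKEY controls which node slice each row lands on (co-locating joins);
# SORTKEY controls on-disk ordering so range filters scan fewer blocks.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sales (
        sale_id     BIGINT,
        customer_id BIGINT,
        sale_date   DATE,
        amount      DECIMAL(12,2)
    )
    DISTKEY (customer_id)
    SORTKEY (sale_date);
""")

# COPY bulk-loads from S3 in parallel, far faster than row-by-row INSERTs.
cur.execute("""
    COPY sales
    FROM 's3://my-bucket/sales/2024/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
    FORMAT AS CSV;
""")

conn.commit()
cur.close()
conn.close()
```

In an interview, the key points to draw from this are that DISTKEY determines how rows are distributed across node slices (so joins on that key avoid shuffling), SORTKEY determines on-disk ordering (so range-restricted scans read fewer blocks), and COPY loads from S3 in parallel rather than row by row.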

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies