
3678 Redshift Jobs - Page 22

JobPe aggregates results for easy access; applications are submitted directly on the original job portal.

5.0 years

2 - 9 Lacs

Hyderābād

On-site

Senior Consultant – Global Employer Services Technology Center (GESTC) – Analytics/Reporting

Deloitte Tax Services India Private Limited ("Deloitte Tax in India") commenced operations in June 2004. Since then, nearly all of the Deloitte Tax LLP ("Deloitte Tax") U.S. service lines and regions have obtained support services through Deloitte Tax in India. We provide support through the tax transformation taking place in the marketplace. We offer a broad range of fully integrated tax services, combining technology and tax technical resources to uncover insights and smarter solutions for navigating an increasingly complex global environment. We provide opportunities to transform tax operations using contemporary technologies, helping clients move from their current state to the next generation of tax functions. Are you ready to take the next step in your career and find new methods and processes to help clients improve their tax operations using new technologies? If the answer is "Yes," come join the Global Employer Services Technology Center (GESTC).

Job purpose:
As a senior in the Analytics team, you will lead the design, development, and delivery of scalable, high-quality reporting and dashboarding solutions using industry-leading analytics tools and AWS native technologies. You will collaborate closely with analysts, developers, and business stakeholders to build robust, user-friendly reporting systems that drive actionable insights and support business objectives. This role requires a strong technical foundation, attention to detail, and the ability to manage multiple priorities in a dynamic environment. You will also provide technical leadership, mentor junior team members, and support the team in day-to-day workload management.

Key job responsibilities:
- 5+ years of relevant experience in analytics, reporting, or data engineering roles.
- Strong problem-solving, communication, and team collaboration skills.
- Ability to work independently and manage multiple priorities effectively.
- Lead the end-to-end development of analytics solutions, including reporting, dashboarding, and data integration projects.
- Design, implement, and optimize reports and dashboards using SSRS/Bold Reports, QlikView/Qlik Sense, and other analytics tools.
- Develop and maintain data pipelines and integrations using PL/SQL, REST APIs, and AWS services (S3, Lambda, EC2, Glue ETL, Redshift).
- Apply programming skills in C# or Python to support custom analytics and automation requirements.
- Oversee database design, data modeling, and query performance tuning for relational and data warehouse environments.
- Implement and manage CI/CD pipelines, leveraging Azure DevOps for code management, RDL promotion, and deployment automation.
- Support and maintain legacy analytics and ETL solutions built on SSIS, SQL Server, and Oracle, and lead efforts to migrate these solutions to the AWS stack as needed.
- Collaborate with cross-functional teams to ensure solutions meet business requirements, quality standards, and project timelines.
- Mentor and guide junior team members, providing technical oversight and support.

Education/Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Key skills desired

Must have:
- Hands-on experience with analytics/reporting/dashboarding tools (SSRS/Bold Reports and QlikView/Qlik Sense preferred).
- Strong knowledge of PL/SQL and relational databases.
- Experience implementing REST APIs.
- Proficiency in at least one programming language (C# or Python).
- Practical experience with AWS cloud services (S3, Lambda, EC2, Glue ETL).
- Experience with AWS Redshift and data warehousing concepts.
- Experience with SQL and writing complex SQL queries.
- Familiarity with CI/CD processes and tools (Azure DevOps preferred).
- Excellent written and verbal communication skills.
- Ability to interact and collaborate with individuals at all levels of the organization.
- Strong critical thinking and problem-solving abilities.
- Proven track record of consistently meeting project expectations and deadlines.

Good to have:
- Experience with containerization (Kubernetes/AWS EKS, Docker).
- Exposure to managed Kafka streaming on AWS.
- Understanding of Agile development methodologies.
- Experience with SSRS, database design/modeling, and query optimization.
- Knowledge of .NET API development and testing.
- Experience with legacy ETL and database technologies, including SSIS, SQL Server, and Oracle, with the ability to support and migrate these solutions to AWS.

#CA-GSD #CA-HPN

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 307125
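The posting combines Python, Lambda, S3, and Redshift. As a purely illustrative sketch (not Deloitte's code; the bucket, cluster, schema, and IAM role names are hypothetical), a Lambda handler can react to a new S3 object and issue a Redshift COPY through the Redshift Data API:

```python
# Illustrative only: a Lambda handler that loads a newly arrived S3 file
# into Redshift via the Redshift Data API. All identifiers are hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # S3 event notifications carry the bucket and key of the new object
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    copy_sql = f"""
        COPY analytics.staging_events
        FROM 's3://{bucket}/{key}'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS CSV IGNOREHEADER 1;
    """
    # ExecuteStatement is asynchronous; poll describe_statement for status
    resp = redshift_data.execute_statement(
        ClusterIdentifier="reporting-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )
    return {"statement_id": resp["Id"]}
```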

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Amazon has an exciting opportunity for a Business Intelligence Engineer to join our online retail team. The Retail team operates as a merchant in Amazon; it owns functions like merchandising, marketing, inventory management, vendor management, and program management. In this pivotal role, you'll support these functions with business intelligence derived from our vast array of data and will play a role in the long-term growth and success of Amazon in the APAC region. You will work with stakeholders from the Pricing Program to contribute to Amazon's pricing strategies, partner with vendor and inventory managers to help improve product cost structures, and support the marketing team in building their strategies using extremely large volumes of complex data. You will explore datasets, write complex SQL queries, and build data pipelines and data visualization solutions with AWS QuickSight. You will also build new machine learning models to predict the outcomes of key inputs.

Key job responsibilities
As a BI Engineer in the APAC Retail BI team, you will build constructive partnerships with key stakeholders that enable your business understanding and your ability to develop true business insights and recommendations. You'll have the opportunity to work with other BI experts locally and internationally to learn and develop best practices, always applying a data-driven approach. Amazon is widely known for our obsession over customers. In this role your stakeholders will be counting on you to help us understand customer behaviour and improve our offerings. This role does include periodic reporting responsibilities, but it's really much more diverse than that. If this role is right for you, you will enjoy the challenge of pivoting between ad-hoc pieces of analysis, reporting enhancements, and new builds, as well as working on long-term strategic projects to enhance the BI & Analytics capabilities in Amazon.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka - A66
Job ID: A3021847
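The role centres on writing complex SQL against Redshift. As a hedged illustration (the schema, connection details, and query are invented for the example, not Amazon's), a window-function query of the kind such roles involve might be run from Python like this:

```python
# Illustrative only: running an analytical window-function query against
# Redshift from Python. Connection details and schema are hypothetical.
import pandas as pd
import psycopg2

WEEKLY_PRICE_TREND = """
    SELECT product_id,
           date_trunc('week', order_date) AS week,
           AVG(unit_price)                AS avg_price,
           AVG(unit_price) - LAG(AVG(unit_price)) OVER (
               PARTITION BY product_id
               ORDER BY date_trunc('week', order_date)
           ) AS wow_price_change
    FROM retail.orders
    GROUP BY product_id, date_trunc('week', order_date)
    ORDER BY product_id, week;
"""

conn = psycopg2.connect(
    host="reporting-cluster.example.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="retail", user="bi_user", password="...",
)
# Window functions run after GROUP BY, so LAG over AVG() is valid here
trend = pd.read_sql(WEEKLY_PRICE_TREND, conn)
```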

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site

The Consulting Data Engineer role requires experience in both traditional warehousing technologies (e.g. Teradata, Oracle, SQL Server) and modern database/data warehouse technologies (e.g. AWS Redshift, Azure Synapse, Google BigQuery, Snowflake), as well as expertise in ETL tools and frameworks (e.g. SSIS, Azure Data Factory, AWS Glue, Matillion, Talend), with a focus on how these technologies affect business outcomes. This person should have experience with both on-premise and cloud deployments of these technologies, and in transforming data to adhere to logical and physical data models, data architectures, and engineering a dataflow to meet business needs. This role will support engagements such as data lake design, data management, migrations of data warehouses to the cloud, and database security models, and ideally should have experience in a large enterprise in these areas.

Responsibilities:
- Develops high-performance distributed data warehouses, distributed analytic systems, and cloud architectures
- Participates in developing relational and non-relational data models designed for optimal storage and retrieval
- Develops, tests, and debugs batch and streaming data pipelines (ETL/ELT) to populate databases and object stores from multiple data sources using a variety of scripting languages; provides recommendations to improve data reliability, efficiency, and quality
- Works alongside data scientists, supporting the development of high-performance algorithms, models, and prototypes
- Implements data quality metrics, standards, and guidelines; automates data quality checks/routines as part of data processing frameworks; validates flow of information
- Ensures that Data Warehousing and Big Data systems meet business requirements and industry practices, including but not limited to automation of system builds, security requirements, performance requirements, and logging/monitoring requirements

Knowledge, Skills, and Abilities:
- Ability to translate a logical data model into a relational or non-relational solution
- Expert in one or more of the following ETL tools: SSIS, Azure Data Factory, AWS Glue, Matillion, Talend, Informatica, Fivetran
- Hands-on experience in setting up end-to-end cloud-based data lakes
- Hands-on experience in database development using views, SQL scripts, and transformations
- Ability to translate complex business problems into data-driven solutions
- Working knowledge of reporting tools like Power BI, Tableau, etc.
- Ability to identify data quality issues that could affect business outcomes
- Flexibility in working across different database technologies and propensity to learn new platforms on the fly
- Strong interpersonal skills
- Team player prepared to lead or support depending on the situation
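As a hedged sketch of the batch ETL/ELT work described above (the paths, table, and columns are invented for illustration, not from the posting), a PySpark job that conforms a raw extract into a dimension table might look like:

```python
# Illustrative only: a batch PySpark pipeline that shapes raw customer
# records into a dimension table. Paths and names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim_customer_build").getOrCreate()

raw = spark.read.parquet("s3://example-lake/raw/customers/")

dim_customer = (
    raw.dropDuplicates(["customer_id"])
       .withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))
       .withColumn("load_ts", F.current_timestamp())
       .select("customer_id", "full_name", "country", "segment", "load_ts")
)

# Overwrite the curated zone; a warehouse COPY or external table points here
dim_customer.write.mode("overwrite").parquet("s3://example-lake/curated/dim_customer/")
```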

Posted 2 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
- AWS Glue Catalog: 3 years (Required)
- Data Engineering: 5 years (Required)
- AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
- AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
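Since the posting weighs Glue experience heavily, here is a minimal, hedged skeleton of a Glue ETL job; the catalog database, table, and output path are placeholders invented for illustration:

```python
# Illustrative Glue ETL job skeleton; database, table, and path are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, drop obviously bad rows, write Parquet
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
clean = source.toDF().filter("order_id IS NOT NULL")

clean.write.mode("overwrite").parquet("s3://example-curated/orders/")
job.commit()
```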

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Company
JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data. Founded in 2010, we are a team of 450+ consultants based in London, UK, and a team of 300+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base. We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors, and have in-depth experience in private equity, pharmaceuticals, government departments, and high-street chains. Our team is as cutting edge as our work. We pride ourselves on being great to work with – no jargon or corporate-speak, flexible to change and receptive to feedback. We have a huge focus on investing in the training and professional development of our team, to ensure they can deliver high-quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.

About the Role
7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.

Responsibilities
- Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
- Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
- Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
- Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
- Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
- Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
- Ensure documentation is up to date for architecture, SOPs, and common issues.
- Contribute to service reviews, retrospectives, and continuous improvement planning.
- Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
- Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers, and internal cluster leads.
- Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Qualifications
- ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two – mandatory).
- Data Warehousing: Azure SQL Server/Redshift/BigQuery/Databricks/Snowflake (any one – mandatory).
- Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
- Cloud: Azure (mandatory); AWS or GCP (good to have).
- SQL and Scripting: Ability to read/debug SQL and Python scripts.
- Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
- Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
- DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Required Skills
- Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
- Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations.
- Hands-on understanding of SQL and scripting languages (Python preferred) for debugging/troubleshooting.
- Proficiency with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus.
- Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
- Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
- Strong stakeholder communication, documentation, and presentation skills.
- Experience working with global teams and collaborating across time zones.

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Andhra Pradesh, India

On-site

Data Engineer

Must have 9+ years of experience in the skills below.

Must Have: Big Data concepts, Python (core Python, able to write code), SQL, Shell Scripting, AWS S3

Good to Have: Event-driven architecture/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
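As a hedged illustration of the event-driven Python/S3/SQS combination this listing names (the queue URL and bucket are invented), a minimal consumer might look like:

```python
# Illustrative only: poll an SQS queue and archive each message to S3.
# Queue URL and bucket name are hypothetical.
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/events"

def drain_once() -> int:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        s3.put_object(
            Bucket="example-event-archive",
            Key=f"raw/{msg['MessageId']}.json",
            Body=msg["Body"].encode("utf-8"),
        )
        # Delete only after a successful write so failures are retried
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return len(messages)
```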

Posted 2 weeks ago

Apply

5.0 - 10.0 years

14 - 16 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

We are looking for engineers with expert-level skills in both Talend and PySpark.

Engineer Skillset:
1. Talend Studio
2. PySpark
3. AWS Ecosystem (S3, Glue, CloudWatch, SSM, IAM, etc.)
4. Redshift
5. Aurora
6. Teradata

Talend Skillset:
1. Good hands-on experience with Talend Studio and Talend Management Console (TMC)
2. In-depth understanding of Joblets, PreJobs, PostJobs, and SubJobs, with the ability to work through complex designs and understand Talend's data flow and control flow logic
3. Proficient in working with Talend components for S3, Redshift, tDBInput, tMap, etc., and various Java-based components

PySpark:
4. Solid knowledge of PySpark, with the ability to compare, analyze, and validate migrated PySpark code against Talend job definitions for accurate code migration.
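To make the migration angle concrete: a Talend tMap lookup join has a natural PySpark equivalent. This sketch is purely illustrative (the paths and column names are invented), not taken from any actual migration:

```python
# Illustrative only: the PySpark equivalent of a Talend tMap lookup join
# (main flow joined to a lookup, unmatched rows rejected). Names invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tmap_migration_check").getOrCreate()

orders = spark.read.parquet("s3://example/stage/orders/")      # tDBInput main flow
products = spark.read.parquet("s3://example/stage/products/")  # tMap lookup

# Inner join mirrors tMap's default "reject on no match" lookup behavior
joined = orders.join(products, on="product_id", how="inner") \
               .select("order_id", "product_id", "product_name", "quantity")

# During validation, row counts and checksums are compared with the Talend job
print(joined.count())
```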

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Data Engineer role at Konex FinTech Consulting and Services Pvt Ltd

🔹 We're Hiring: Data Engineer
📍 Location: PAN India (Client Locations)
🕒 Experience: 6 to 8 years
💼 Mode of Work: Hybrid (3 Days Work From Office)
📣 Notice Period: Immediate Joiners Preferred
💰 Budget: As per Market Standards

Konex FinTech Consulting and Services Pvt Ltd is looking for a highly skilled and motivated Data Engineer to join our client's dynamic IT team. If you have a strong command of data pipelines, cloud technologies (AWS a must), and strong communication skills – we want to hear from you!

🔧 Key Responsibilities:
- Design and manage scalable real-time/batch data pipelines
- Develop ETL processes and ensure high-quality data flows
- Build and maintain data lakes, warehouses, and related infrastructure
- Optimize systems for scalability and performance
- Collaborate with cross-functional teams to understand data requirements
- Implement data governance, privacy, and security best practices
- Troubleshoot data issues, ensuring high system availability
- Document data architecture and system designs

✅ Required Skills:
- Bachelor's or Master's in Computer Science or a related field
- Strong SQL and hands-on experience with Python/Java/Scala
- ETL tools & orchestration (AWS Glue, Airflow, Talend, etc.)
- Cloud expertise with AWS (S3, Redshift); bonus for GCP or Azure
- Experience with Hadoop, Spark, Kafka
- Strong understanding of data warehousing and modeling
- Familiarity with Git, CI/CD
- Excellent communication and problem-solving abilities

📩 Interested candidates, please share the following details along with your resume to hr@konexcs.com by 5 pm, 22nd July 2025:
• Current CTC
• Expected CTC
• Notice Period (if immediate, mention LWD)
• Current Location
• UAN Details

📞 For more information, or to move quickly on this role, feel free to contact us at +91 9059132299.

Join us and be a part of delivering impactful data-driven solutions!

#hiring #dataengineer #AWS #ETL #datajobs #Konex #analytics #cloudengineering #immediatejoiners #ITjobs #panindia #hybridwork
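As a hedged sketch of the Airflow orchestration named in the skills list (the DAG id, schedule, and task bodies are invented for illustration; the `schedule` argument assumes Airflow 2.4+):

```python
# Illustrative only: a minimal Airflow DAG chaining extract and load steps.
# DAG id, schedule, and task bodies are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source system")  # e.g., API or JDBC extract to S3

def load():
    print("load into Redshift")       # e.g., COPY from S3 into a staging table

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # "schedule_interval" on older Airflow 2.x releases
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="load", python_callable=load)
```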

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About the Team
The Analytics Engineering team at DoorDash is embedded within the Analytics and Data Engineering orgs, and is responsible for building internal data products that scale decision-making across business teams and drive efficiency in our operations. Data is fundamental to DoorDash's success, and this team plays a critical role in enabling high-impact, data-driven solutions across Product, Operations, Finance, and more.

About the Role
As an Analytics Engineer, you'll play a key role in building and scaling the data foundations that enable fast, reliable, and actionable insights. You'll work closely with partner teams to drive end-to-end analytics initiatives, working alongside Data Engineers, Data Scientists, Software Engineers, Product Managers, and Operators. This is a highly technical role where you'll be a driving force behind the analytics stack, delivering trusted data and metrics that support decision-making at all levels of the company. If you're energized by solving technical problems with data and comfortable being deeply embedded across several domains, this role is for you!

You're excited about this opportunity because you will…
- Collaborate with data scientists, data engineers, and business stakeholders to understand business needs, and translate that scope into data requirements
- Identify key business questions and problems to solve for, and generate insights by developing structured solutions to resolve them
- Lead the development of data products and self-serve tools that enable analytics to scale across the company
- Build and maintain canonical datasets by developing high-volume, reliable ETL/ELT pipelines using data lake and data warehousing concepts
- Design metrics and data visualizations with dashboarding tools like Tableau, Sigma, and Mode
- Be a cross-functional champion at upholding high data integrity standards to increase reusability, readability, and standardization

We're excited about you because…
- 5+ years of experience working in business intelligence, analytics engineering, data engineering, or a similar role
- Strong proficiency in SQL for data transformation, and comfort in at least one functional/OOP language such as Python or Scala
- Expertise in creating compelling reporting and data visualization solutions using dashboarding tools (e.g., Looker, Tableau, Sigma)
- Familiarity with database fundamentals (e.g., S3, Trino, Hive, Spark), and experience with SQL performance tuning
- Experience in writing data quality checks to validate data integrity (e.g., Pydeequ, Great Expectations)
- Excellent communication skills and experience working with technical and non-technical teams
- Comfortable working in a fast-paced environment; self-starter and self-organizer
- Ability to think strategically, analyze, and interpret market and consumer information

Nice to Have
- Experience with modern data warehousing platforms (e.g., Snowflake, Databricks, Redshift) and the ability to optimize their performance
- Experience building multi-step ETL jobs coupled with orchestrating workflows (e.g. Airflow, Dagster, dbt)
- Familiarity with experimentation concepts like A/B testing and their data requirements

Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only
We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey

About DoorDash
At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and iterate in order to make impactful decisions that display empathy for our range of users—from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks.

Our Commitment to Diversity and Inclusion
We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection.

We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144

To request a reasonable accommodation under applicable law or an alternate selection process, please inform your recruiting contact upon initial connection.
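The posting calls out data quality checks and names Pydeequ and Great Expectations. As a hedged, hand-rolled sketch of the same idea (not the API of either library; the dataset and columns are invented), simple assertions over a canonical dataset might look like:

```python
# Illustrative only: hand-rolled data quality checks of the kind that
# Pydeequ or Great Expectations formalize. Dataset and columns invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
deliveries = spark.read.parquet("s3://example/curated/deliveries/")

total = deliveries.count()
null_ids = deliveries.filter(F.col("delivery_id").isNull()).count()
dupes = total - deliveries.dropDuplicates(["delivery_id"]).count()
bad_times = deliveries.filter(F.col("delivered_at") < F.col("created_at")).count()

failures = {"null_ids": null_ids, "duplicate_ids": dupes, "negative_durations": bad_times}
if any(failures.values()):
    # In production this would alert the owning team rather than raise
    raise ValueError(f"data quality check failed: {failures}")
```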

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Analyst

Purpose of the Position:
This position involves performing feasibility and impact assessments, reviewing documentation to ensure conformity to methods, designs, and standards, and achieving economies of scale during the support phase. As a senior Business Analyst, you will also be responsible for stakeholder communication, conducting primary and secondary research based on solution and project needs, supporting new solution design and other organizational initiatives, and collaborating with all stakeholders in a multi-disciplinary team environment to build consensus on various data and analytics projects.

Work Location: Pune/Nagpur/Chennai/Bangalore
Type of Employment: Full time

Key Result Areas and Activities:
- Data Analysis and Orchestration: Analyze data from various sources, create logical mappings using standard data dictionaries/business logic, and build SQL scripts to orchestrate data from source (Redshift) to target (Treasure Data – CDP).
- Stakeholder Engagement and Requirement Gathering: Engage with stakeholders to gather business requirements, particularly for data integration projects, ensuring clear communication and understanding.
- Communication and Team Collaboration: Demonstrate excellent communication skills and strong teamwork, contributing effectively to the team's success.
- Stakeholder Management: Manage relationships with stakeholders across different levels, departments, and geographical locations, ensuring effective liaison and coordination.
- Data Modelling and ETL Processes: Utilize excellent data modelling skills (including RDBMS concepts, normalization, dimensional modelling, star/snowflake schema) and possess sound knowledge of ETL/data warehousing processes and data orchestration.

Must Have:
- Well versed/expert in data analysis using SQL.
- Experienced in building data orchestration SQL queries.
- Experience working with business process and data engineering teams to understand and build business logic, data mapping as per the business logic, and SQL orchestration scripts on top of logical data mapping.
- Ability to review systems and map business processes.
- Process and data modelling.

Good To Have:
- Experience with business and data analysis in the Pharmaceutical/Biotech industry.
- Excellent data modelling skills (RDBMS concepts, normalisation, dimensional modelling, star/snowflake schema, etc.).
- Hands-on knowledge of data querying, data analysis, data mining, reporting, and analytics is a plus.

Qualifications:
- 3+ years of experience as a core Data Analyst or Business Analyst focused on data integration/orchestration.
- Bachelor's degree in computer science, engineering, or a related field (Master's degree is a plus).
- Demonstrated continued learning through one or more technical certifications or related methods.

Qualities:
- Self-motivated and focused on delivering outcomes for a fast-growing team and firm.
- Able to communicate persuasively through speaking, writing, and client presentations.
- Able to work with teams and clients in different time zones.
- Research-focused mindset.
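As a hedged illustration of the source-to-target SQL orchestration the role describes (the table names and column mapping are invented; real logical mappings would come from the data dictionary), a small script can generate the orchestration SQL from a mapping:

```python
# Illustrative only: generating a source-to-target INSERT ... SELECT from a
# logical column mapping. Tables and mappings are hypothetical.
SOURCE = "redshift_stage.customer_profile"
TARGET = "cdp.unified_customer"

# logical mapping: target column -> source expression
MAPPING = {
    "customer_key": "customer_id",
    "email_lower":  "LOWER(email)",
    "signup_date":  "CAST(created_at AS DATE)",
    "segment":      "COALESCE(segment, 'unknown')",
}

select_list = ",\n       ".join(f"{expr} AS {col}" for col, expr in MAPPING.items())
orchestration_sql = (
    f"INSERT INTO {TARGET} ({', '.join(MAPPING)})\n"
    f"SELECT {select_list}\n"
    f"FROM {SOURCE};"
)
print(orchestration_sql)  # reviewed, then scheduled by the orchestration layer
```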

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

Job Summary
We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities

Design and Development:
- Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
- Implement data models and schemas that support analytical and reporting requirements.
- Build and maintain robust data APIs for data consumption by various applications and services.

Data Infrastructure:
- Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
- Ensure data security, privacy, and compliance with relevant regulations.
- Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.

Collaboration & Optimization:
- Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
- Optimize existing data processes for efficiency, cost-effectiveness, and performance.
- Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.

Troubleshooting & Support:
- Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
- Provide support and expertise to teams consuming data from the data platform.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
- 3+ years of hands-on experience as a Data Engineer or in a similar role.
- Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
- Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Proven experience with ETL/ELT tools and concepts.
- Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
- Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
- Master's degree in a relevant field.
- Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
- Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data streaming technologies (e.g., Kafka, Kinesis).
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
- Understanding of DevOps principles as applied to data pipelines.
- Prior experience in Telecom is a plus.

Skills: data modeling, ETL/ELT, version control, GCP, PostgreSQL, MySQL, SQL Server, Azure, Scala, Apache Spark, AWS, NoSQL databases, workflow orchestration, containerization, SQL, Java, cloud services, data streaming, data warehousing, Python
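Since the responsibilities include building data APIs for downstream consumers, here is a minimal, hedged sketch; the framework choice (FastAPI), table, and connection details are my own illustration, not the employer's stack:

```python
# Illustrative only: a minimal read-only data API over a warehouse table.
# FastAPI and the schema are illustrative choices, not the employer's stack.
from fastapi import FastAPI, HTTPException
import psycopg2

app = FastAPI()

def get_conn():
    return psycopg2.connect(
        host="warehouse.example.internal", port=5439,
        dbname="analytics", user="api_reader", password="...",
    )

@app.get("/customers/{customer_id}/orders")
def customer_orders(customer_id: int, limit: int = 50):
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT order_id, order_date, total FROM fact_orders "
            "WHERE customer_id = %s ORDER BY order_date DESC LIMIT %s",
            (customer_id, limit),
        )
        rows = cur.fetchall()
    if not rows:
        raise HTTPException(status_code=404, detail="no orders found")
    return [{"order_id": r[0], "order_date": str(r[1]), "total": float(r[2])} for r in rows]
```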

Posted 2 weeks ago

Apply

3.0 years

5 - 8 Lacs

Thiruvananthapuram Taluk, India

On-site

Job Title: Data Engineer
📍 Location: Trivandrum (Hybrid)
💼 Experience: 3+ Years
💰 Salary: Up to ₹8 LPA

Job Summary
We are hiring a skilled Data Engineer with 3+ years of experience to design, build, and optimize data pipelines and infrastructure. You will work closely with data scientists, analysts, and engineers to ensure reliable, scalable, and secure data delivery across cloud and on-prem systems.

Key Responsibilities
- Design and develop ETL/ELT pipelines and implement data models for analytics/reporting.
- Build and maintain data APIs, ensuring data availability, security, and compliance.
- Develop and optimize data infrastructure on AWS, Azure, or GCP.
- Collaborate with stakeholders to gather requirements and deliver scalable data solutions.
- Monitor pipelines for performance and data quality; implement alerts and issue resolution.
- Participate in code reviews and documentation, and enforce engineering best practices.

Mandatory Skills & Qualifications
- Bachelor's in Computer Science, Engineering, or a related field.
- 3+ years in Data Engineering or a similar role.
- Strong knowledge of Python/Java/Scala, SQL, and relational databases (PostgreSQL, MySQL, etc.).
- Experience with ETL/ELT and data warehousing (Snowflake, Redshift, BigQuery, Synapse).
- Cloud experience in AWS, Azure, or GCP (e.g., S3, Glue, Data Factory, BigQuery).
- Knowledge of data modeling (Kimball/Inmon).
- Version control using Git.
- Strong problem-solving, communication, and collaboration skills.

Preferred (Nice To Have)
- Master's degree.
- Experience with Apache Spark, Kafka/Kinesis, NoSQL (MongoDB/Cassandra).
- Familiarity with Docker/Kubernetes, Airflow, and DevOps for data pipelines.
- Experience in the Telecom domain is a plus.

Skills: GCP, Kinesis, AWS, Airflow, SQL, data pipelines, version control, ELT, Scala, Docker, ETL, Apache Spark, DevOps, Azure, Python, data modeling, Java, Git, NoSQL, data warehousing, Kubernetes, cloud, Kafka

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

You will be working as a Business Intelligence Engineer III in Pune on a 6-month contract basis with the TekWissen organization. Your primary responsibility will be data engineering on AWS, including designing and implementing scalable data pipelines using AWS services such as S3, AWS Glue, Redshift, and Athena. You will also focus on data modeling and transformation, developing and optimizing dimensional data models to support various business intelligence and analytics use cases. Additionally, you will collaborate with stakeholders to understand reporting and analytics requirements and build interactive dashboards and reports using visualization tools like the client's QuickSight.

Your role will also involve implementing data quality checks and monitoring processes to ensure data integrity and reliability. You will be responsible for managing and maintaining the AWS infrastructure required for the data and analytics platform, optimizing the performance, cost, and security of the underlying cloud resources. Collaboration with cross-functional teams and sharing knowledge and best practices will be essential for identifying data-driven insights.

As a successful candidate, you should have at least 3 years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficiency in designing and implementing data pipelines using AWS services like S3, Glue, Redshift, Athena, and Lambda is mandatory. You should also possess expertise in data modeling, dimensional modeling, and data transformation techniques, and experience deploying business intelligence solutions using tools like QuickSight and Tableau. Strong SQL and Python programming skills are required for data processing and analysis. Knowledge of cloud architecture patterns, security best practices, and cost optimization on AWS is crucial. Excellent communication and collaboration skills are necessary to work effectively with cross-functional teams.

Hands-on experience with Apache Spark, Airflow, or other big data technologies, as well as familiarity with AWS DevOps practices and tools, agile software development methodologies, and AWS certifications, will be considered preferred skills. The position requires a candidate with a graduate degree. TekWissen Group is an equal opportunity employer supporting workforce diversity.
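As a hedged illustration of the Athena side of such a pipeline (the database, table, and results bucket are invented), a query can be started and polled from Python like this:

```python
# Illustrative only: run an Athena query and wait for it to finish.
# Database, table, and result bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena")

def run_query(sql: str) -> str:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics_lake"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)  # poll until the asynchronous query completes

print(run_query("SELECT region, COUNT(*) FROM fact_sales GROUP BY region"))
```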

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Req ID: 323795

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Tableau Admin to join our team in Chennai, Tamil Nādu (IN-TN), India (IN).

Tableau Admin
Reporting to the Manager of Business Intelligence, this position provides application deployment support, maintenance, modernization, and primary research of technical issues. The Tableau Administrator may also assist with special projects for clients, such as researching data issues, researching and implementing new integrations, or developing and implementing processes to manipulate data per customer requests.

Primary Duties & Responsibilities
- Install, configure, and maintain a multi-tier Tableau infrastructure environment to ensure reliable performance and scalability
- Automate tasks like deployment of data sources/workbooks across environments and alerting via scripting (PowerShell)
- Assist in the development of workbooks and dashboards to support business initiatives
- Work in embedded analytics environments, including integration of Tableau and Salesforce
- Use Tableau Server metadata and admin views
- Manage and support migration of workbooks and dashboards from development to production environments
- Build landing pages and apply row-level security to data sources
- Work closely with the Tableau vendor on product fixes and enhancements
- Develop and document standards/best practices for development and administration of the Tableau platform
- Monitor server activity/usage statistics to identify possible performance issues/enhancements
- Work with cross-functional teams on the day-to-day execution of data & analytics projects and initiatives
- Work very closely with Data Architects, DBAs, and the Security team in setting up an environment that meets the needs of users
- Troubleshoot and optimize Tableau dashboards/workbooks and extracts; build best practices and performance guides
- Solve user issues in Tableau applications developed in Tableau Desktop
- Work with cloud computing platforms (AWS) and other open-source technologies
- Good understanding of Active Directory, DNS, load balancers, and network firewalls
- Experience configuring single sign-on (SSO) with SAML authentication and an identity provider (IdP)
- Experience with data sources like Redshift, Athena, Aurora, and MS SQL Server
- Good experience writing SQL queries
- Demonstrate high personal standards of behavior in a professional environment; demonstrate credibility, competence, and proactivity
- Self-motivated, with a willingness and strong desire to learn
- Hands-on experience installing and configuring QlikView Server and Publisher, maintaining server logs, and performing QlikView Server backups

Required Skills
- Supported applications: Tableau (Admin/Desktop/Prep), QlikView, and Informatica
- Primary skill should be Tableau administration, with some Tableau development experience
- Knowledge of QlikView and Informatica PowerCenter/Cloud preferred
- Good experience with SQL queries preferred
- AWS cloud experience preferred
- Ability to task-switch effectively based on business priorities
- Team lead with strong communication and interpersonal skills
- Strong organizational skills, along with a demonstrated ability to manage multiple tasks simultaneously
- Communicate effectively with technical and non-technical audiences
- Self-motivated and driven
- Can-do attitude: contributing to key improvements and innovations

Required Knowledge & Experience
- Bachelor's degree from a college or university, with at least 2 years of focused Tableau administration experience and 4+ years of total Tableau experience in an IT department, preferably within a Business Intelligence team, and/or an equivalent combination of education and work experience
- Strong technical, analytical, and communication skills
- Insurance industry experience in life, health, and annuity is a plus

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
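The duties emphasize scripted automation of workbook management; the posting names PowerShell. As a hedged Python alternative using the Tableau Server Client library (the server URL, site, credentials, and workbook name are placeholders), listing workbooks and triggering an extract refresh looks roughly like:

```python
# Illustrative only: list workbooks and trigger an extract refresh with the
# Tableau Server Client library. Server, site, and credentials are placeholders.
import tableauserverclient as TSC

auth = TSC.TableauAuth("admin_user", "secret", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbooks, _ = server.workbooks.get()  # first page of workbooks
    for wb in workbooks:
        print(wb.name, wb.project_name)
        if wb.name == "Daily Sales":
            server.workbooks.refresh(wb)  # queues an extract refresh job
```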

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

P2-C3-STS

Primary Skill
- AWS CDK with TypeScript; CloudFormation templates
- AWS services such as Redshift, Glue, IAM roles, KMS keys, Secrets Manager, Airflow, SFTP, AWS Lambda, S3, and event triggers using Lambda
- Knowledge of working with AWS Redshift SQL Workbench and executing grants

Expert with 10+ years of hands-on experience in the skills above.
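The role names CDK (in TypeScript) for wiring S3 event triggers to Lambda. For consistency with the other sketches on this page, here is the same construct graph in CDK's Python binding; the stack name, bucket, function, and asset path are invented, and this is an illustration rather than the team's actual infrastructure code:

```python
# Illustrative only: CDK (Python binding) wiring an S3 event trigger to a
# Lambda loader. The posting names the TypeScript flavor of the same API.
import aws_cdk as cdk
from aws_cdk import Stack, aws_s3 as s3, aws_lambda as _lambda, aws_s3_notifications as s3n
from constructs import Construct

class IngestStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        raw_bucket = s3.Bucket(self, "RawBucket")  # landing zone for extracts

        loader = _lambda.Function(
            self, "RedshiftLoader",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical asset dir
        )
        # Fire the loader whenever a new object lands in the bucket
        raw_bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(loader)
        )

app = cdk.App()
IngestStack(app, "ingest-pipeline")
app.synth()
```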

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana

On-site

Location: Gurugram, Haryana, India
Job Id: GGN00002145
Category: Information Technology
Job Type: Full-Time
Posted Date: 07/22/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job overview and responsibilities
This role will be responsible for collaborating with the Business and IT teams to identify the value, scope, features, and delivery roadmap for data engineering products and solutions, and for communicating with stakeholders across the board, including customers, business managers, and the development team, to make sure the goals are clear and the vision is aligned with business objectives.
- Perform data analysis using SQL
- Data quality analysis, data profiling, and summary reports
- Trend analysis and dashboard creation based on visualization techniques
- Execute the assigned projects/analyses as per the agreed timelines and with accuracy and quality
- Complete analysis as required, document results, and formally present findings to management
- Perform ETL workflow analysis, create current/future state data flow diagrams, and help the team assess the business impact of any changes or enhancements
- Understand the existing Python code workbooks and write pseudocode
- Collaborate with key stakeholders to identify the business case/value and create documentation
- Excellent communication and analytical skills are expected

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications

Required
- BE, BTech or equivalent in computer science or a related STEM field
- 5+ years of total IT experience as either a Data Analyst/Business Data Analyst or as a Data Engineer
- 2+ years of experience with Big Data technologies like PySpark, Hadoop, Redshift, etc.
- 3+ years of experience writing SQL queries on RDBMS or cloud-based databases
- Experience with visualization tools such as Spotfire, Power BI, QuickSight, etc.
- Experience in data analysis and requirements gathering
- Strong problem-solving skills
- Creative, driven, detail-oriented focus, tackling tough problems with data and insights
- Natural curiosity and desire to solve problems
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English and Hindi (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

Preferred
- AWS Certification
- Strong experience with continuous integration & delivery using Agile methodologies
- Data engineering experience in the transportation/airline industry
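As a hedged sketch of the PySpark trend analysis the role mentions (the dataset path and columns are invented for illustration):

```python
# Illustrative only: a monthly trend aggregation in PySpark.
# Dataset path and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("booking_trends").getOrCreate()
bookings = spark.read.parquet("s3://example/curated/bookings/")

monthly = (
    bookings.withColumn("month", F.date_trunc("month", "booking_date"))
            .groupBy("month", "route")
            .agg(F.count("*").alias("bookings"),
                 F.avg("fare").alias("avg_fare"))
            .orderBy("month", "route")
)
monthly.show(12)  # feeds a dashboard or summary report
```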

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana

On-site

Location: Gurugram, Haryana, India
Job Id: GGN00001785
Job Category: Information Technology
Job Type: Full-Time
Posted Date: 07/22/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job overview and responsibilities
United Airlines is seeking talented people to join the Data Engineering team as a Sr. AWS Redshift DBA. The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. As a Redshift DBA, you will engage with the Data Engineering and DevOps teams (including internal customers) on Redshift database administration initiatives.
- Handle database administration, performance tuning, management, and security of the AWS Redshift deployment and enterprise data warehouse
- Provide technical support for all database environments (development, pre-production, and production) and be responsible for setting up, configuring, and maintaining the infrastructure, including security, working alongside the DE and Cloud Engineering teams
- Develop and implement innovative solutions leading to automation
- Mentor and train junior engineers

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications
We are seeking a dedicated and skilled Redshift Database Administrator with a proven track record of optimizing database performance, ensuring data security, and implementing scalable solutions, ready to leverage expertise in Redshift and AWS to drive efficiency and reliability in a dynamic organization.

Required
- BS/BA in computer science or a related STEM field; individuals who have a natural curiosity and a desire to solve problems are encouraged to apply
- 4+ years of IT experience, preferably in Redshift database administration, SQL, and query optimization; experience in performance tuning and monitoring is a must
- 3+ years of experience in scripting (Python, Bash) and in database security & compliance
- 4+ years with AWS Redshift in a production environment
- 3+ years of experience with relational database systems such as Oracle and Teradata
- 2+ years of experience migrating existing on-premise applications to AWS
- Excellent, proven knowledge of Postgres/SQL on Amazon RDS
- Excellent, proven knowledge of SQL
- Hands-on management of Redshift clusters, including provisioning, monitoring, and performance tuning to ensure optimal query execution
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English and Hindi (written and spoken)
- Successful completion of an interview is required to meet the job qualification
- Reliable, punctual attendance is an essential function of the position

Preferred
- Master's degree in Computer Science or a related STEM field
- Experience with cloud-based systems like AWS, Azure, or Google Cloud
- AWS Certified Developer / Architect
- Strong experience with continuous integration & delivery using Agile methodologies
- Data engineering experience in the transportation/airline industry
- Strong problem-solving skills
- Strong knowledge of Big Data
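For readers sizing up the day-to-day work, here is a minimal sketch (not part of the posting) of the kind of performance-monitoring check a Redshift DBA runs. It assumes the psycopg2 driver and Redshift's standard svv_table_info system view; the cluster endpoint, credentials, and thresholds are hypothetical placeholders.

```python
# Illustrative only: flag tables that likely need VACUUM/ANALYZE.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="dev",
    user="admin",
    password="...",  # use a secrets manager in practice
)

# svv_table_info exposes per-table stats; high "unsorted" or "stats_off"
# values flag candidates for VACUUM and ANALYZE.
sql = """
    SELECT "schema", "table", unsorted, stats_off, tbl_rows
    FROM svv_table_info
    WHERE unsorted > 20 OR stats_off > 10
    ORDER BY unsorted DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for schema, table, unsorted, stats_off, rows in cur.fetchall():
        print(f"{schema}.{table}: unsorted={unsorted}%, stats_off={stats_off}%, rows={rows}")
conn.close()
```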

Posted 2 weeks ago

Apply

0.0 - 18.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bengaluru, Karnataka
Job ID: 30181669
Job Category: Digital Technology
Job Title – Senior Product Manager
Preferred Location – Bangalore, India
Full Time/Part Time – Full Time

Build a career with confidence
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

Role Responsibilities:
Data Strategy & Architecture
- Develop and execute the enterprise data roadmap in alignment with business and IT objectives.
- Define best practices for data governance, storage, processing, and analytics.
- Promote standardized data management patterns, including data lakes, data warehouses, and real-time processing architectures.
- Ensure data solutions support both legacy and cloud-native systems while maintaining security, scalability, and efficiency.

Product Leadership & Execution
- Define and prioritize data product features, ensuring alignment with business goals.
- Work with cross-functional teams, including data engineering, business intelligence, and application development, to deliver scalable data solutions.
- Oversee the full lifecycle of data products, from design and development to deployment and monitoring.
- Evaluate emerging technologies and tools that enhance data capabilities, including AI/ML and advanced analytics.

Governance & Operational Excellence
- Establish governance policies for data quality, security, and compliance in alignment with regulatory standards.
- Implement monitoring, logging, and alerting mechanisms to ensure data integrity and availability.
- Drive automation in data quality testing and deployment using CI/CD practices.
- Collaborate with security and compliance teams to ensure data protection and privacy compliance.

Role Purpose:
The Data Product Manager will be responsible for defining and executing the strategy for enterprise data solutions. This role will oversee the development, implementation, and optimization of data pipelines, governance frameworks, and analytics capabilities to support seamless data access and utilization across enterprise applications. The successful candidate will collaborate closely with engineering, business stakeholders, and IT teams to ensure data interoperability, security, and scalability. This role will drive initiatives that enhance real-time analytics, data-driven decision-making, and overall digital transformation efforts.

Minimum Requirements:
- 12-18 years of overall experience, with 7+ years in data management, product management, or enterprise architecture roles.
- Proven expertise in designing and managing data solutions, including data lakes, data warehouses, and ETL/ELT pipelines.
- Hands-on experience with data platforms such as Snowflake, Redshift, BigQuery, Databricks, or similar.
- Strong understanding of data governance, security, and compliance frameworks.
- Experience integrating SaaS and on-prem systems, including ERP, CRM, and HR platforms (SAP, Salesforce, Workday, etc.).
- Familiarity with DevOps and CI/CD tools like GitHub Actions, AWS CodePipeline, or Jenkins.
- Experience with real-time and batch data movement solutions, including AWS DMS, Qlik Replicate, or custom CDC frameworks.
- Exposure to data mesh architectures, advanced analytics, or AI/ML-driven data products.
- Hands-on experience with Python, SQL, JSON/XML transformations, and data testing frameworks.
- Certifications in AWS, data platforms, or enterprise data management.

Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
- Have peace of mind and body with our health insurance
- Make yourself a priority with flexible schedules and leave policy
- Drive your career forward through professional development opportunities
- Achieve your personal goals with our Employee Assistance Program

Our commitment to you
Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Now!

Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bengaluru, Karnataka
Job ID: 30181213
Job Category: Digital Technology
Job Title – Data Lakehouse Platform Architect/Engineer
Preferred Location – Bangalore/Hyderabad, India
Full Time/Part Time – Full Time

Build a career with confidence
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

Role Responsibilities:
- Lead the architecture and engineering of data lakehouse platforms using Apache Iceberg on AWS S3, enabling scalable storage and multi-engine querying.
- Design and build infrastructure-as-code solutions using AWS CDK or Terraform to support repeatable, automated deployments.
- Deliver and optimize ELT/ETL data pipelines for real-time and batch workloads with AWS Glue, Apache Spark, Kinesis, and Airflow.
- Enable compute engines (Athena, EMR, Redshift, Trino, Snowflake) through efficient schema design, partitioning, and metadata strategies.
- Champion observability and operational excellence across the platform by implementing robust monitoring, alerting, and logging practices.
- Drive automation through CI/CD pipelines using GitHub Actions, CircleCI, or AWS CodePipeline, improving deployment speed and reliability.
- Partner cross-functionally with data engineers, DevOps, security, and FinOps teams to align platform features to evolving business needs.
- Provide thought leadership on open standards, cost optimization, and scaling data platform capabilities to support AI/ML and analytics initiatives.

Minimum Requirements:
- 14+ years of experience in data engineering, cloud infrastructure, or platform engineering roles, with at least 3 years in a senior or lead capacity.
- Expert-level experience with AWS services (S3, Glue, Kinesis, IAM, CloudWatch, EMR).
- Strong working knowledge of Apache Iceberg or similar open table formats (e.g., Delta Lake, Hudi).
- Proficiency in Python, with the ability to build infrastructure, automation, and data workflows.
- Demonstrated experience designing data lakehouse architectures supporting large-scale analytics and ML use cases.
- Hands-on experience with CI/CD pipelines, infrastructure-as-code, and cloud-native automation tooling.
- Strong understanding of data governance principles, schema evolution, partitioning, and access controls.

Preferred Qualifications:
- Familiarity with AWS Lake Formation, Snowflake, Databricks, or Trino.
- Experience optimizing cloud cost and performance through FinOps practices.
- Prior experience contributing to platform strategy or mentoring junior engineers.
- Understanding of security, compliance, and operational controls in regulated enterprise environments.

Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
- Have peace of mind and body with our health insurance
- Make yourself a priority with flexible schedules and leave policy
- Drive your career forward through professional development opportunities
- Achieve your personal goals with our Employee Assistance Program

Our commitment to you
Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Now!

Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.
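To make the infrastructure-as-code side of this role concrete, here is a minimal, illustrative AWS CDK (v2, Python) sketch: an S3 bucket to back an Iceberg lakehouse plus a Glue Data Catalog database that engines like Athena, EMR, and Trino can resolve table metadata against. All resource names are hypothetical placeholders, not part of the posting.

```python
# Illustrative sketch only, assuming aws-cdk-lib v2 is installed.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_glue as glue
from constructs import Construct


class LakehouseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned object store for Iceberg table data and metadata files.
        s3.Bucket(
            self,
            "LakehouseBucket",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,  # keep data on stack teardown
        )

        # Glue Data Catalog database for registering Iceberg tables.
        glue.CfnDatabase(
            self,
            "LakehouseDatabase",
            catalog_id=self.account,
            database_input=glue.CfnDatabase.DatabaseInputProperty(
                name="lakehouse_db",  # hypothetical name
            ),
        )


app = App()
LakehouseStack(app, "LakehouseStack")
app.synth()
```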

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

DevOn is a leading provider of innovative technology solutions focused on data-driven decision-making, cloud computing, and advanced analytics. We are passionate about solving complex business problems through technology and are currently seeking a skilled and motivated Data Engineer Lead to join our dynamic team.

As a Data Engineer Lead at DevOn, your primary responsibility will be to lead the design, development, and maintenance of data pipelines and ETL workflows using modern cloud technologies. You will collaborate closely with cross-functional teams to ensure data availability, reliability, and scalability, facilitating data-driven decision-making throughout the organization. This role requires a deep understanding of Python, PySpark, AWS Glue, Redshift, SQL, Jenkins, Bitbucket, EKS, and Airflow.

Key Responsibilities:
- Lead the design and implementation of scalable data pipelines and ETL workflows in a cloud environment, primarily AWS.
- Develop and manage data ingestion, transformation, and storage frameworks using AWS Glue, PySpark, and Redshift.
- Architect and enhance complex SQL queries for large datasets, ensuring data integrity across systems.
- Work collaboratively with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions.
- Automate the end-to-end data pipeline process using Jenkins and Bitbucket, promoting efficient CI/CD practices.
- Manage and optimize data orchestration using Apache Airflow.
- Provide technical leadership and mentorship to junior team members, ensuring adherence to best practices in data engineering.
- Use AWS services such as Redshift, S3, Lambda, and EKS to deploy and manage data solutions.
- Resolve complex data pipeline issues promptly, ensuring minimal downtime and high availability.
- Participate in architecture and design reviews, contributing insights on technical solutions and enhancements.
- Continuously assess new tools and technologies to improve the efficiency and scalability of our data infrastructure.

Required Skills and Qualifications:
- 5+ years of professional experience in Data Engineering, with demonstrated expertise in building scalable data pipelines and ETL workflows.
- Proficiency in Python for data processing and scripting.
- Hands-on experience with PySpark for large-scale data processing.
- Thorough knowledge of AWS Glue, Redshift, S3, and other AWS services.
- Advanced skills in SQL for data manipulation and optimization.
- Experience with Jenkins and Bitbucket for CI/CD automation.
- Familiarity with EKS (Elastic Kubernetes Service) for containerized deployment of data applications.
- Proficiency in Apache Airflow for data orchestration and workflow automation.
- Strong problem-solving abilities and skill in debugging complex data workflow issues.
- Excellent communication skills for collaborating with cross-functional teams and articulating technical concepts clearly.
- Ability to work in an Agile development environment, managing multiple priorities and meeting tight deadlines.

Preferred Qualifications:
- Experience with additional AWS services (e.g., Lambda, Redshift Spectrum, Athena).
- Familiarity with Docker and container orchestration technologies like Kubernetes.
- Knowledge of data modeling and data warehousing concepts.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
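As a concrete illustration of the PySpark/Glue pipeline work this listing describes, here is a minimal batch transform sketch: raw CSV in from S3, cleaned partitioned Parquet out. The bucket paths, schema, and column names are hypothetical, not from the posting.

```python
# Illustrative only: a minimal PySpark batch ETL step.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw CSV landed in S3 (placeholder path).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Basic cleansing: type the amount column, drop rows missing keys,
# and add a load timestamp for lineage.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "customer_id"])
       .withColumn("load_ts", F.current_timestamp())
)

# Write partitioned Parquet for downstream Redshift Spectrum/Athena reads.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
spark.stop()
```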

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

As a Cloud & AI Solution Engineer at Microsoft, you will be part of a dynamic team that is at the forefront of innovation in the realm of databases and analytics. Your role will involve working on cutting-edge projects that leverage the latest technologies to drive meaningful impact for commercial customers. If you are insatiably curious and deeply passionate about tackling complex challenges in the era of AI, this is the perfect opportunity for you.

In this role, you will play a pivotal part in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack. You will collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics. Your responsibilities will include hands-on engagements such as Proof of Concepts, hackathons, and architecture workshops to guide customers through secure, scalable solution design and accelerate database and analytics migration into their deployment workflows.

To excel in this position, you should have 10+ years of technical pre-sales or technical consulting experience, or a Bachelor's/Master's Degree in Computer Science or a related field with 4+ years of technical pre-sales experience. You should be an expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL) and Azure Analytics (Fabric, Azure Databricks, Purview), as well as competitors in the data warehouse, data lake, big data, and analytics space. Additionally, you should have experience with cloud and hybrid infrastructure, architecture designs, migrations, and technology management.

As a trusted technical advisor, you will guide customers through solution design, influence technical decisions, and help them modernize their data platform to realize the full value of Microsoft's platform. You will drive technical sales, lead hands-on engagements, build trusted relationships with platform leads, and maintain deep expertise in Microsoft's Analytics Portfolio and Azure Databases.

By joining our team, you will have the opportunity to accelerate your career growth, develop deep business acumen, and hone your technical skills. You will be part of a collaborative and creative team that thrives on continuous learning and flexible work opportunities. If you are ready to take on this exciting challenge and be part of a team that is shaping the future of cloud Database & Analytics, we invite you to apply and join us on this journey.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You are a seasoned Data Engineer with expertise in SQL, Python, and AWS, responsible for designing and managing data solutions. Your role involves:
- Understanding and analyzing client business requirements and recommending modern data tools
- Developing and maintaining data pipelines and ETL processes
- Creating and optimizing data models for client reporting and analytics
- Ensuring seamless data integration and visualization
- Communicating with clients for updates and issue resolution
- Staying current with industry best practices and emerging technologies

You should have:
- 3-5 years of experience in data engineering/analytics
- Proficiency in SQL and Python for data manipulation and analysis, plus knowledge of PySpark
- Experience with data warehouse platforms such as Redshift and Google BigQuery
- Familiarity with AWS services such as S3, Glue, and Athena, and proficiency in Airflow
- Familiarity with event-tracking platforms such as GA or Amplitude
- Strong problem-solving skills, adaptability, and excellent communication skills
- A proactive approach to client engagement and the ability to collaborate effectively with team members and clients

In return, you will receive a competitive salary with a bonus, employee discounts across all brands, medical and health insurance, a collaborative work environment, and a good-vibes work culture.
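Since Airflow proficiency features in this listing, here is a minimal, illustrative Airflow 2.x DAG of the kind such a role maintains: two Python tasks chained into a daily pipeline. The DAG id, task names, and function bodies are hypothetical placeholders.

```python
# Illustrative only: a minimal daily extract-then-load DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # e.g., pull yesterday's events from an API or S3 landing zone
    print("extracting...")


def load():
    # e.g., upsert the transformed batch into Redshift/BigQuery
    print("loading...")


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract before load
```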

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Technical Lead with over 8 years of experience in Data Engineering, Analytics, and Python development, including at least 3 years in a Technical Lead / Project Management role, you will drive data engineering and analytics projects for our clients. Your client-facing skills will be essential in ensuring successful project delivery and effective communication between technical and business stakeholders.

Your responsibilities will include:
- Designing and implementing secure, scalable data architectures on cloud platforms such as AWS, Azure, or GCP
- Leading the development of cloud-based data engineering solutions covering data ingestion, transformation, and storage, while defining best practices for integrating diverse data sources securely
- Overseeing the security aspects of integrations and ensuring compliance with organizational and regulatory requirements
- Developing and managing robust ETL/ELT pipelines using Python, SQL, and modern orchestration tools
- Integrating real-time streaming data using technologies like Apache Kafka, Spark Structured Streaming, or cloud-native services
- Collaborating with data scientists to integrate AI models into production pipelines and cloud infrastructure
- Performing advanced data analysis to generate actionable insights for business use cases
- Designing intuitive Tableau dashboards and data visualizations
- Defining data quality checks and validation frameworks to ensure high-integrity data pipelines
- Developing and deploying data products and integrations, drawing on expertise in REST API development, backend services, and secure API integration

To excel in this role, you must have deep hands-on experience with cloud platforms; expertise in Python, SQL, Spark, Kafka, and streaming integration; proven ability with data warehousing solutions like BigQuery, Snowflake, and Redshift; and a strong understanding of integration security principles. Proficiency in data visualization with Tableau, REST API development, and AI/ML integration is also essential.

Preferred qualifications include prior experience managing enterprise-scale data engineering projects, familiarity with DevOps practices, and an understanding of regulatory compliance requirements for data handling. Your ability to lead technical teams, ensure project delivery, and drive innovation in data engineering and analytics will be key to your success in this role.
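To ground the streaming-integration requirement above, here is a minimal, illustrative Spark Structured Streaming job that reads from Kafka and appends to a data lake path. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic, and S3 locations are hypothetical placeholders.

```python
# Illustrative only: Kafka -> Parquet streaming sketch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

# Subscribe to a Kafka topic; Kafka delivers key/value as binary columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

# Continuously append raw payloads to the lake; the checkpoint directory
# lets the job resume consistently after a restart.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```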

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Support Engineer at Precisely, you will play a crucial role in driving solutions to complex issues and ensuring the success of our customers. Your technical expertise will be essential in supporting Precisely Data Integration investments, including various products and Sterling B2B Integrator. Your problem-solving skills, technical depth, effective communication, and ability to innovate will be key attributes for excelling in this role.

In this position, your responsibilities will include providing top-notch technical support via phone, email, and remote desktop connections, meeting SLA requirements, updating stakeholders promptly, and documenting critical information. You will be tasked with swiftly resolving issues to guarantee customer satisfaction, investigating and solving complex problems across different platforms, software systems, and databases. Your understanding of enterprise systems will be pivotal in identifying the root cause of issues and recommending suitable solutions.

Continuous learning and knowledge sharing are integral parts of this role. You will be expected to stay updated on new technologies, tools, and systems and share your insights with the team. Developing comprehensive internal and external Knowledge Base documentation will be essential for enhancing customer and team support. Additionally, you will contribute to debugging, suggesting solutions, and tools for product improvements.

Requirements for this role include a Bachelor's or Master's degree in Computer Science or a related field, exceptional communication skills, strong analytical abilities, and a self-motivated approach to problem-solving. A keen interest in learning new technologies, understanding software design principles, and proficiency in database management systems and networking design are essential. Experience with debugging, object-oriented languages, distributed computing, and various technologies will be advantageous.

If you are enthusiastic about tackling challenging problems, working under tight deadlines, and providing excellent technical support, this role at Precisely offers a rewarding opportunity to grow and contribute to a leading organization in data integrity.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Position Summary
The GPS Data & Analytics Software Engineer I role is accountable for developing data solutions. The role will be accountable for developing the pipelines for the data enablement projects, production/application support, and enhancements. Additional responsibilities include data analysis, data operations processes and tools, data cataloguing, and developing data SME skills in the Global Product Development and Supply - Analytics and AI Enablement organization.

Key Responsibilities
The Data Engineer will be responsible for designing, building, delivering, and maintaining high-quality data products and analytics-ready data solutions for GPS Cell Therapy.
- Develop cloud-based (AWS) data pipelines using DBT and Glue
- Optimize data storage and retrieval to ensure efficient performance and scalability
- Collaborate with data architects, data analysts, and stakeholders to understand their data needs and ensure that the data infrastructure supports their requirements
- Ensure data quality and protection through validation, testing, and security protocols
- Implement and maintain security protocols to protect sensitive data
- Stay up to date with emerging trends and technologies in data engineering, analytical engineering, and analytics, and adapt to new technologies
- Participate in the analysis, design, build, manage, and operate lifecycle of the enterprise data lake and analytics-focused digital capabilities
- Work in an agile environment, debugging issues on the go
- Understand existing models where required and take them forward
- Use JIRA for effort estimation, task tracking, and communication about tasks
- Use GIT for version control, quality checks, and reviews
- Proficiency in Python, Spark, SQL, AWS Redshift, DBT, AWS S3, Glue/Glue Studio, Athena, IAM, and other native AWS services; familiarity with Domino/data lake principles
- Good to have: knowledge/hands-on experience in React.js for creating analytics dashboards if required
- Good to have: knowledge of AWS CloudFormation templates
- Partner with other data, platform, and cloud teams to identify opportunities for continuous improvement

Required:
- 1-3 years of experience in the information technology field developing AWS cloud-native data lakes and ecosystems
- Understanding of cloud technologies, preferably AWS, and related services in delivering and supporting data and analytics solutions/data lakes
- Working knowledge of GIT and version-control good practices
- Proficiency in Python, Spark, SQL, and AWS services
- Good to have: experience in React.js and full-stack technologies
- Good to have: experience in an agile development environment using JIRA or similar task tracking & management tools
- Good to have: experience/knowledge of working with DBT
- Knowledge of data security and privacy best practices

Ideal Candidates Would Also Have:
- Prior experience in global life sciences, especially in the GPS functional area
- Experience working internationally with a globally dispersed team, including diverse stakeholders and management of offshore technical development teams
- Strong communication and presentation skills

Other Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Computer Engineering, or equivalent is preferred

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as "Transforming patients' lives through science™", every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role:
- Site-essential roles require 100% of shifts onsite at your assigned facility.
- Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture.
- For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
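As a concrete illustration of the AWS-native pipeline plumbing this role mentions, here is a minimal boto3 sketch that starts an AWS Glue job and polls it to a terminal state. The job name, region, and runtime argument are hypothetical placeholders, not details from the posting.

```python
# Illustrative only: trigger a Glue job and wait for it to finish.
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Start the (hypothetical) ingestion job with a runtime argument.
run = glue.start_job_run(
    JobName="gps_cell_therapy_ingest",
    Arguments={"--load_date": "2024-01-01"},
)
run_id = run["JobRunId"]

# Poll until Glue reports a terminal state.
while True:
    state = glue.get_job_run(JobName="gps_cell_therapy_ingest", RunId=run_id)[
        "JobRun"
    ]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Job finished: {state}")
        break
    time.sleep(30)
```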

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies