0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Company Description
Profile Solution is an innovative supplier of thermal management and airflow solutions for the data center, international telecom, and IT markets. The company is headquartered in Mumbai with an office in Singapore. Profile Solution specializes in products such as perforated high-volume tiles, intelligent active floor tiles, overhead air movers, air blocks, rack baffles, raised floor partition solutions, thermal testing, and cooling audit analysis.
Role Description
This is a full-time on-site role for a Sales & Estimation Engineer at Profile Solution in Mumbai. The Sales & Estimation Engineer will be responsible for conducting on-site audits to identify variables in data centers, recommending solutions, and working with clients to implement energy-efficient cooling approaches. The role involves collaborating with top cloud computing companies to provide thermal containment infrastructure solutions.
Qualifications
Sales and estimation skills
Technical knowledge in thermal management and airflow solutions
Experience in conducting on-site audits and analyzing data center variables
Strong communication and presentation skills
Ability to collaborate effectively with clients and team members
Bachelor's degree in Engineering or a related field
Previous experience in the data center industry is a plus
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Responsibilities:
✅ Build and optimize scalable data pipelines using Python, PySpark, and SQL.
✅ Design and develop on the AWS stack (S3, Glue, EMR, Athena, Redshift, Lambda).
✅ Leverage Databricks for data engineering workflows and orchestration.
✅ Implement ETL/ELT processes with strong data modeling (Star/Snowflake schemas).
✅ Work on job orchestration using Airflow, Databricks Jobs, or AWS Step Functions.
✅ Collaborate with agile, cross-functional teams to deliver reliable data solutions.
✅ Troubleshoot and optimize large-scale distributed data environments.
Must-Have:
✅ 4–6+ years in Data Engineering.
✅ Hands-on experience in Python, SQL, PySpark, and AWS services.
✅ Solid Databricks expertise.
✅ Experience with DevOps tools: Git, Jenkins, GitHub Actions.
✅ Understanding of data lake/lakehouse/warehouse architectures.
Good to Have:
✅ AWS/Databricks certifications.
✅ Experience with data observability tools (Monte Carlo, Datadog).
✅ Exposure to regulated domains like Healthcare or Finance.
✅ Familiarity with streaming tools (Kafka, Kinesis, Spark Streaming).
✅ Knowledge of modern data concepts (Data Mesh, Data Fabric).
✅ Experience with visualization tools: Power BI, Tableau, QuickSight.
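For illustration, here is a minimal sketch of the kind of PySpark batch pipeline this posting describes (extract from S3, transform, load partitioned Parquet). The bucket paths and column names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 locations and column names, for illustration only.
RAW_PATH = "s3://example-raw-bucket/orders/"
CURATED_PATH = "s3://example-curated-bucket/orders/"

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw CSV files landed by an upstream ingestion job.
raw = spark.read.option("header", True).csv(RAW_PATH)

# Transform: basic cleansing and a derived partition column.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

# Load: write Parquet partitioned by date for efficient downstream queries.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)
```

The same pattern applies whether the job runs on EMR, Glue, or Databricks; only the cluster configuration and storage paths change.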
Posted 1 week ago
4.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Mindera
At Mindera, we craft software with people we love. We're a collaborative, global team of engineers who value open communication, great code, and building impactful products. We're looking for a talented C#/.NET Developer to join our growing team in Gurugram and help us build scalable, high-quality software systems.
Requirements: What You'll Do
Build, maintain, and scale robust C#/.NET applications in a fast-paced Agile environment.
Work closely with product owners and designers to bring features to life.
Write clean, maintainable code following SOLID and OOP principles.
Work with SQL/NoSQL databases, optimizing queries and schema designs.
Collaborate in a Scrum or Kanban environment with engineers around the world.
Use Git for version control and participate in code reviews.
Contribute to our CI/CD pipelines and automated testing workflows.
Must-Have Skills: What We're Looking For
4-9 years of hands-on experience with C# and .NET technologies.
Solid understanding of Object-Oriented Programming (OOP) and clean code principles.
Proven experience working with databases (SQL or NoSQL).
Experience in an Agile team (Scrum/Kanban).
Familiarity with Git and collaborative development practices.
Exposure to CI/CD pipelines and test automation.
Nice-to-Have Skills
Experience with Rust (even hobbyist experience is valued).
Background working with Python or Scala for Spark-based applications.
Hands-on with Docker and container-based architecture.
Familiarity with Kubernetes for orchestration.
Experience working with Apache Airflow for data workflows.
Cloud experience with Google Cloud Platform (GCP) or Microsoft Azure.
Benefits We Offer
Flexible working hours (self-managed)
Competitive salary
Annual bonus, subject to company performance
Access to Udemy online training and opportunities to learn and grow within the role
About Mindera
At Mindera we use technology to build products we are proud of, with people we love. Software Engineering Applications, including Web and Mobile, are at the core of what we do at Mindera. We partner with our clients to understand their product and deliver high-performance, resilient, and scalable software systems that create an impact for their users and businesses across the world. You get to work with a bunch of great people, where the whole team owns the project together. Our culture reflects our lean and self-organisation attitude. We encourage our colleagues to take risks, make decisions, work in a collaborative way and talk to everyone to enhance communication. We are proud of our work and we love to learn all and everything while navigating through an Agile, Lean and collaborative environment.
Follow our LinkedIn page: https://tinyurl.com/minderaindia
Check out our Blog: http://mindera.com/ and our Handbook: http://bit.ly/MinderaHandbook
Our offices are located: Aveiro, Portugal | Porto, Portugal | Leicester, UK | San Diego, USA | San Francisco, USA | Chennai, India | Bengaluru, India
Posted 1 week ago
0.0 - 6.0 years
0 - 0 Lacs
Haryana, Haryana
On-site
Job Overview
We are seeking a skilled and detail-oriented HVAC Engineer with experience in cleanroom HVAC systems, including ducting, mechanical piping, and sheet metal works. The ideal candidate will assist in site execution, technical coordination, and quality assurance in line with cleanroom standards for pharmaceutical, biotech, or industrial facilities.
Key Responsibilities:
Support end-to-end HVAC system execution, including ducting, AHU installation, chilled water piping, and insulation.
Supervise and coordinate day-to-day HVAC activities at the site in line with approved drawings and technical specifications.
Review and interpret HVAC layouts, shop drawings, and coordination drawings for proper implementation.
Ensure HVAC materials (ducts, dampers, diffusers, filters, etc.) meet project specifications and site requirements.
Coordinate with other services (plumbing, electrical, BMS, fire-fighting) to ensure conflict-free execution.
Monitor subcontractor work and labor force for compliance with timelines, quality, and safety standards.
Assist in air balancing, testing and commissioning activities, including HEPA filter installation and pressure validation.
Conduct site surveys and measurements, and prepare daily/weekly progress reports.
Maintain records for material movement, consumption, and inspection checklists.
Work closely with the design and planning team to address technical issues and implement design revisions.
Ensure cleanroom HVAC work complies with ISO 14644, GMP guidelines, and other regulatory standards.
Required Skills & Qualifications:
Diploma / B.Tech / B.E. in Mechanical Engineering or equivalent.
3–6 years of site execution experience in HVAC works, preferably in cleanroom or pharma/industrial MEP projects.
Sound knowledge of duct fabrication, SMACNA standards, GI/SS materials, and cleanroom duct installation techniques.
Hands-on experience with HVAC drawings, site measurement, and installation planning.
Familiarity with testing procedures such as DOP/PAO testing, air balancing, and filter integrity testing.
Proficient in AutoCAD, MS Excel, and basic computer applications.
Good communication skills, site discipline, and teamwork.
Desirable Attributes:
Knowledge of cleanroom classifications and airflow management.
Ability to manage vendors, material tracking, and basic troubleshooting.
Familiar with safety practices and quality control procedures on site.
Job Type: Full-time
Pay: ₹30,000.00 - ₹50,000.00 per month
Benefits: Health insurance, Life insurance, Provident Fund
Schedule: Day shift
Supplemental Pay: Overtime pay
Ability to commute/relocate: Haryana, Haryana: Reliably commute or planning to relocate before starting work (Preferred)
Language: English (Preferred)
Work Location: In person
Posted 1 week ago
3.0 years
15 - 20 Lacs
Madurai, Tamil Nadu
On-site
Dear Candidate,
Greetings of the day! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net
About TechMango
Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic solutions aligned with the business goals of its partners. We are a leading full-scale Software and Mobile App Development Company. Techmango is driven by the mantra "Clients' Vision is our Mission", and we stay true to it. Our aim is to be the technologically advanced and most loved organization, providing high-quality and cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Website: https://www.techmango.net/
Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.
Role Summary
As a GCP Data Engineer, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.
Key Responsibilities:
Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP).
Define data strategy, standards, and best practices for cloud data engineering and analytics.
Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery.
Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery).
Architect data lakes, warehouses, and real-time data platforms.
Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP).
Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers.
Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards.
Provide technical leadership in architectural decisions and future-proofing the data ecosystem.
Required Skills & Qualifications:
5+ years of experience in data architecture, data engineering, or enterprise data platforms.
Minimum 3 years of hands-on experience in GCP data services.
Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, Star/Snowflake schema).
Experience with real-time data processing, streaming architectures, and batch ETL pipelines.
Good understanding of IAM, networking, security models, and cost optimization on GCP.
Prior experience in leading cloud data transformation projects.
Excellent communication and stakeholder management skills.
Preferred Qualifications:
GCP Professional Data Engineer / Architect Certification.
Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics.
Exposure to AI/ML use cases and MLOps on GCP.
Experience working in agile environments and client-facing roles.
What We Offer:
Opportunity to work on large-scale data modernization projects with global clients.
A fast-growing company with a strong tech and people culture.
Competitive salary, benefits, and flexibility.
Collaborative environment that values innovation and leadership.
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Application Question(s): Current CTC? Expected CTC? Notice Period? (If you are serving your notice period, please mention your last working day.)
Experience: GCP Data Architecture: 3 years (Required); BigQuery: 3 years (Required); Cloud Composer (Airflow): 3 years (Required)
Location: Madurai, Tamil Nadu (Required)
Work Location: In person
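As a rough illustration of the ingestion work described above (Dataflow, Pub/Sub, BigQuery), below is a minimal Apache Beam streaming sketch. It assumes the apache-beam[gcp] package is installed and uses placeholder project, topic, and table names; a real Dataflow job would also configure runner and worker options.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical resource names; a real job would also set DataflowRunner options.
TOPIC = "projects/example-project/topics/events"
TABLE = "example-project:analytics.events"

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            TABLE,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```

The same pipeline code can run locally with the DirectRunner for testing and on Dataflow in production by switching pipeline options.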
Posted 1 week ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Join our dynamic and high-impact Data team as a Data Engineer, where you'll be responsible for safely receiving and storing trading-related data for the India teams, as well as operating and improving our shared data access and data processing systems. This is a critical role in the organisation, as the data platform drives a huge range of trader analysis, simulation, reporting and insights. The ideal candidate should have work experience in systems engineering, preferably with prior exposure to financial markets, and proven working knowledge of Linux administration, orchestration and automation tools, systems hardware architecture, and storage and data protection technologies.
Your Core Responsibilities:
Manage and monitor all distributed systems, storage infrastructure, and data processing platforms, including HDFS, Kubernetes, Dremio, and in-house data pipelines.
Drive heavy focus on systems automation and CI/CD to enable rapid deployment of hardware and software solutions.
Collaborate closely with systems and network engineers, traders, and developers to support and troubleshoot their queries.
Stay up to date with the latest technology trends in the industry; propose, evaluate, and implement innovative solutions.
Your Skills and Experience:
5–7 years of experience in managing large-scale multi-petabyte data infrastructure in a similar role.
Advanced knowledge of Linux system administration and internals, with proven ability to troubleshoot issues in Linux environments.
Deep expertise in at least one of the following technologies: Kafka, Spark, Cassandra/Scylla, or HDFS.
Strong working knowledge of Docker, Kubernetes, and Helm.
Experience with data access technologies such as Dremio and Presto.
Familiarity with workflow orchestration tools like Airflow and Prefect.
Exposure to cloud platforms such as AWS, GCP, or Azure.
Proficiency with CI/CD pipelines and version control systems like Git.
Understanding of best practices in data security and compliance.
Demonstrated ability to solve problems proactively and creatively with a results-oriented mindset.
Quick learner with excellent troubleshooting skills.
High degree of flexibility and adaptability.
About Us
IMC is a global trading firm powered by a cutting-edge research environment and a world-class technology backbone. Since 1989, we've been a stabilizing force in financial markets, providing essential liquidity upon which market participants depend. Across our offices in the US, Europe, Asia Pacific, and India, our talented quant researchers, engineers, traders, and business operations professionals are united by our uniquely collaborative, high-performance culture, and our commitment to giving back. From entering dynamic new markets to embracing disruptive technologies, and from developing an innovative research environment to diversifying our trading strategies, we dare to continuously innovate and collaborate to succeed.
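The orchestration tools named above (Airflow, Prefect) often show up in this kind of role as small monitoring or validation DAGs. A minimal Airflow 2.x sketch, with a placeholder check and hypothetical DAG and task names, might look like this:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def check_hdfs_ingest(**context):
    # Placeholder for a real check, e.g. verifying yesterday's partition landed.
    print("validating ingest for", context["ds"])


default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="daily_ingest_health",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    validate = PythonOperator(
        task_id="validate_ingest",
        python_callable=check_hdfs_ingest,
    )
```

In practice the callable would query HDFS, Dremio, or an internal metadata store and alert on missing or late data.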
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Role description
Job Title: Digital Technologist (DevOps)
Department: Information Technology
Location: Lower Parel, Mumbai
Who are we?
Axis Asset Management Company Ltd (Axis AMC), founded in 2009, is one of India's largest and fastest-growing mutual funds. We proudly serve over 1.3 crore customers across 100+ cities with utmost humility. Our success is built on three founding principles:
• Long Term Wealth Creation
• Customer-Centric Approach
• Sustainable Relationships
Our investment philosophy emphasizes risk management and encourages partners and investors to move from transactional investing to fulfilling critical life goals. We offer a diverse range of investment solutions to help customers achieve financial independence and a happier tomorrow.
What will you do?
As a DevOps Lead, you will play a pivotal role in driving the automation, scalability, and reliability of our development and deployment processes.
Key Responsibilities:
1. CI/CD Pipeline Development: Design, implement, and maintain robust CI/CD workflows using Jenkins, Azure Repos, Docker, and PySpark. Ensure seamless integration with AWS services such as Airflow and EKS.
2. Cloud & Infrastructure Management: Architect and manage scalable, fault-tolerant, and cost-effective cloud solutions using AWS services including EC2, RDS, EKS, DynamoDB, Secrets Manager, Control Tower, Transit Gateway, and VPC.
3. Security & Compliance: Implement security best practices across the DevOps lifecycle. Utilize tools like SonarQube, Checkmarx, Trivy, and AWS Inspector to ensure secure application deployments. Manage IAM roles, policies, and service control policies (SCPs).
4. Containerization & Orchestration: Lead container lifecycle management using Docker, Amazon ECS, EKS, and AWS Fargate. Implement orchestration strategies including blue-green deployments, Ingress controllers, and ArgoCD.
5. Frontend & Backend CI/CD: Build and manage CI/CD pipelines for frontend applications (Node.js, Angular, React) and backend microservices (Spring Boot) using tools like Maven and Nexus/Azure Artifacts.
6. Infrastructure as Code (IaC): Develop and maintain infrastructure using Terraform or AWS CloudFormation to support repeatable and scalable deployments.
7. Scripting & Automation: Write and maintain automation scripts in Python, Groovy, and Shell/Bash for deployment, monitoring, and system management tasks.
8. Version Control & Artifact Management: Manage source code and artifacts using Git, Azure Repos, Nexus, and Azure Artifacts.
9. Disaster Recovery & High Availability: Design and implement disaster recovery strategies, multi-AZ, and multi-region architectures to ensure business continuity.
10. Collaboration & Leadership: Work closely with development, QA, and operations teams to streamline workflows and mentor junior team members in DevOps best practices.
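Responsibility 7 above mentions Python automation scripts for monitoring and system management. A small hedged sketch of that kind of script, here flagging running EC2 instances that lack an "owner" tag, could look like the following; the region and tag key are assumptions for illustration only.

```python
import boto3

# Minimal sketch: flag running EC2 instances missing an "owner" tag,
# the kind of small compliance/automation task this role describes.
ec2 = boto3.client("ec2", region_name="ap-south-1")

paginator = ec2.get_paginator("describe_instances")
untagged = []

for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "owner" not in tags:
                untagged.append(instance["InstanceId"])

print("instances missing an owner tag:", untagged)
```

A script like this would typically be scheduled (for example via Airflow or a Lambda on a schedule) and wired to a notification channel rather than printing to stdout.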
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About Holcim
Holcim is the leading partner for sustainable construction, creating value across the built environment, from infrastructure and industry to buildings. We offer high-value end-to-end Building Materials and Building Solutions - from foundations and flooring to roofing and walling - powered by premium brands including ECOPlanet, ECOPact and ECOCycle®. More than 45,000 talented Holcim employees in 45 attractive markets - across Europe, Latin America and Asia, Middle East & Africa - are driven by our purpose to build progress for people and the planet, with sustainability and innovation at the core of everything we do.
About The Role
The Data Engineer will play an important role in enabling the business for data-driven operations and decision-making in an Agile and product-centric IT environment.
Education / Qualification
BE / B.Tech from IIT or Tier I / II colleges.
Certification in cloud platforms (AWS or GCP).
Experience
Total experience of 4-8 years.
Hands-on experience in Python coding is a must.
Experience in data engineering which includes laudatory account.
Hands-on experience in Big Data cloud platforms like AWS (Redshift, Glue, Lambda), data lakes, and data warehouses; data integration and data pipelines.
Experience in SQL and writing code in the Spark engine using Python/PySpark.
Experience in data pipeline and workflow management tools (such as Azkaban, Luigi, Airflow, etc.).
Key Personal Attributes
Business-focused, customer- and service-minded.
Strong consultative and management skills.
Good communication and interpersonal skills.
Posted 1 week ago
9.0 - 15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Snowflake Data Architect
Experience: 9 to 15 Years
Location: Gurugram
Job Summary:
We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making.
Key Responsibilities:
Design, develop, and implement scalable Snowflake-based data architectures.
Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts.
Optimize Snowflake performance through clustering, partitioning, and caching strategies.
Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions.
Ensure data quality, governance, integrity, and security across all platforms.
Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake.
Automate data workflows and support CI/CD deployment practices.
Implement data modeling techniques including dimensional modeling, star/snowflake schema, normalization/denormalization.
Support and promote metadata management and data governance best practices.
Technical Skills (Hard Skills):
Expertise in Snowflake: architecture design, performance tuning, cost optimization.
Strong proficiency in SQL, Python, and scripting for data engineering tasks.
Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar.
Proficient in data modeling (dimensional, relational, star/snowflake schema).
Good knowledge of cloud platforms: AWS, Azure, or GCP.
Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks.
Experience with CI/CD tools and version control systems (e.g., Git).
Knowledge of BI tools such as Tableau, Power BI, or Looker.
Certifications (Preferred/Required):
✅ Snowflake SnowPro Core Certification – Required or Highly Preferred
✅ SnowPro Advanced Architect Certification – Preferred
✅ Cloud Certifications (e.g., AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate) – Preferred
✅ ETL Tool Certifications (e.g., Talend, Matillion) – Optional but a plus
Soft Skills:
Strong analytical and problem-solving capabilities.
Excellent communication and collaboration skills.
Ability to translate technical concepts into business-friendly language.
Proactive, detail-oriented, and highly organized.
Capable of multitasking in a fast-paced, dynamic environment.
Passionate about continuous learning and adopting new technologies.
Why Join Us?
Work on cutting-edge data platforms and cloud technologies.
Collaborate with industry leaders in analytics and digital transformation.
Be part of a data-first organization focused on innovation and impact.
Enjoy a flexible, inclusive, and collaborative work culture.
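To make the clustering point above concrete, here is a hedged sketch using the Snowflake Python connector to create a date-clustered fact table and inspect its clustering quality. Connection parameters and table names are placeholders, not anything taken from the posting.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice these come from a vault
# or environment configuration, never hard-coded.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MART",
)

try:
    cur = conn.cursor()
    # A fact table clustered on the date key, a common pruning strategy
    # for large Snowflake tables queried by date range.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS FACT_SALES (
            DATE_KEY DATE,
            PRODUCT_KEY NUMBER,
            CUSTOMER_KEY NUMBER,
            AMOUNT NUMBER(18, 2)
        )
        CLUSTER BY (DATE_KEY)
    """)
    # Inspect how well the micro-partitions are clustered on that key.
    cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('FACT_SALES', '(DATE_KEY)')")
    print(cur.fetchone()[0])
finally:
    conn.close()
```

Clustering keys are usually worth adding only on very large, frequently range-filtered tables, since automatic reclustering consumes credits.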
Posted 1 week ago
0.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Description
Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools.
Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.
Qualifications
Systems (MIS), Data Science or related field.
15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
Extensive production experience with Databricks, Apache Spark, and other related technologies.
Familiarity with orchestration and ELT tools like Airflow, dbt, etc.
Expert SQL knowledge.
Proficiency in programming languages such as Python, Scala, or Java.
Strong understanding of data warehousing concepts.
Experience with cloud platforms such as Azure, AWS, Google Cloud.
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Strong communication and leadership skills, with the ability to effectively mentor and guide.
Experience with machine learning and data science workflows.
Knowledge of data governance and security best practices.
Certification in Databricks, Azure, Google Cloud or related technologies.
Job: Engineering
Primary Location: India-Karnataka-Bengaluru
Schedule: Full-time
Travel: No
Req ID: 252684
Job Hire Type: Experienced
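As a small illustration of the Databricks/Delta pipeline work described above, here is a hedged PySpark sketch of an idempotent merge (upsert) into a Delta table. It assumes a Delta-enabled Spark session (Databricks or delta-spark), an existing target table, and hypothetical table and path names.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Assumes a Databricks (or delta-spark enabled) session; names are hypothetical.
spark = SparkSession.builder.appName("customers_upsert").getOrCreate()

# Incremental batch arriving from an upstream ingestion step.
updates = spark.read.format("delta").load("/mnt/bronze/customers_updates")

target = DeltaTable.forName(spark, "silver.customers")

# Idempotent upsert (merge), a typical ELT pattern for slowly changing source data.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Because the merge is keyed on the business identifier, re-running the batch after a failure does not create duplicates, which simplifies retries in Airflow or DLT.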
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bengaluru, Karnataka, India
Job ID: R-232528
Date posted: 28/07/2025
Job Title: Analyst – Data Engineer
Introduction to role:
Are you ready to make a difference in the world of data science and advanced analytics? As a Data Engineer within the Commercial Strategic Data Management team, you'll play a pivotal role in transforming data science solutions for the Rare Disease Unit. Your mission will be to craft, develop, and deploy data science solutions that have a real impact on patients' lives. By leveraging cutting-edge tools and technology, you'll enhance delivery performance and data engineering capabilities, creating a seamless platform for the Data Science team and driving business growth. Collaborate closely with the Data Science and Advanced Analytics team, US Commercial leadership, Sales Field Team, and Field Operations to build data science capabilities that meet commercial needs. Are you ready to take on this exciting challenge?
Accountabilities:
Collaborate with the Commercial Multi-functional team to find opportunities for using internal and external data to enhance business solutions.
Work closely with business and advanced data science teams on cross-functional projects, delivering complex data science solutions that contribute to the Commercial Organization.
Manage platforms and processes for complex projects using a wide range of data engineering techniques in advanced analytics.
Prioritize business and information needs with management; translate business logic into technical requirements, such as creating queries, stored procedures, and scripts.
Interpret data, process it, analyze results, present findings, and provide ongoing reports.
Develop and implement databases, data collection systems, data analytics, and strategies that optimize data efficiency and quality.
Acquire data from primary or secondary sources and maintain databases/data systems.
Identify and define new process improvement opportunities.
Manage and support data solutions in BAU scenarios, including data profiling, designing data flows, creating business alerts for fields, and query optimization for ML models.
Essential Skills/Experience:
BS/MS in a quantitative field (Computer Science, Data Science, Engineering, Information Systems, Economics).
5+ years of work experience with database and data engineering skills: Python, SQL, Snowflake, Amazon Redshift, MongoDB, Apache Spark, Apache Airflow, AWS cloud and Amazon S3, Oracle, Teradata.
Good experience in Apache Spark, Talend Administration Center, or AWS Lambda, MongoDB, Informatica, SQL Server Integration Services.
Experience in building ETL pipelines and data integration.
Efficient data management: extract, consolidate, and store large datasets with improved data quality and consistency.
Streamlined data transformation: convert raw data into usable formats at scale, automate tasks, and apply business rules.
Good written and verbal skills to communicate complex methods and results to diverse audiences; willing to work in a cross-cultural environment.
Analytical mind with a problem-solving inclination; proficiency in data manipulation, cleansing, and interpretation.
Experience in support and maintenance projects, including ticket handling and process improvement.
Setting up workflow orchestration (schedule and manage data pipelines for smooth flow and automation).
Understanding of scalability and performance (handling large data volumes with optimized processing capabilities).
Experience with Git.
Desirable Skills/Experience:
Knowledge of distributed computing and Big Data technologies like Hive, Spark, Scala, HDFS; use of these technologies along with statistical tools like Python/R.
Experience working with HTTP requests/responses and REST API services.
Familiarity with data visualization tools like Tableau, Qlik, Power BI, Excel charts/reports.
Working knowledge of Salesforce/Veeva CRM, data governance, and data mining algorithms.
Hands-on experience with EHR, administrative claims, and laboratory data (e.g., Prognos, IQVIA, Komodo, Symphony claims data).
Good experience in consulting, healthcare, or biopharmaceuticals.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
At AstraZeneca's Alexion division, you'll find an environment where your work truly matters. Embrace the opportunity to grow and innovate within a rapidly expanding portfolio. Experience the entrepreneurial spirit of a leading biotech combined with the resources of a global pharma. You'll be part of an energizing culture where connections are built to explore new ideas. As a member of our commercial team, you'll meet the needs of under-served patients worldwide. With tailored development programs designed for skill enhancement and fostering empathy for patients' journeys, you'll align your growth with our mission. Supported by exceptional leaders and peers across marketing and compliance, you'll drive change with integrity in a culture celebrating diversity and innovation. Ready to make an impact? Apply now to join our team!
Date Posted: 29-Jul-2025
Closing Date: 04-Aug-2025
Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy, (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.
Posted 1 week ago
10.0 - 14.0 years
35 - 45 Lacs
Hyderabad
Work from Office
About the Team
At DAZN, the Analytics Engineering team is at the heart of turning hundreds of data points into meaningful insights that power strategic decisions across the business. From content strategy to product engagement, marketing optimization to revenue intelligence, we enable scalable, accurate, and accessible data for every team.
The Role
We're looking for a Lead Analytics Engineer to take ownership of our analytics data pipeline and play a pivotal role in designing and scaling our modern data stack. This is a hands-on technical leadership role where you'll shape the data models in dbt/Snowflake, orchestrate pipelines using Airflow, and enable high-quality, trusted data for reporting.
Key Responsibilities
Lead the development and governance of DAZN's semantic data models to support consistent, reusable reporting metrics.
Architect efficient, scalable data transformations on Snowflake using SQL/dbt and best practices in data warehousing.
Manage and enhance pipeline orchestration with Airflow, ensuring timely and reliable data delivery.
Collaborate with stakeholders across Product, Finance, Marketing, and Technology to translate requirements into robust data models.
Define and drive best practices in version control, testing, and CI/CD for analytics workflows.
Mentor and support junior engineers, fostering a culture of technical excellence and continuous improvement.
Champion data quality, documentation, and observability across the analytics layer.
You'll Need to Have
10+ years of experience in data/analytics engineering, with 2+ years leading or mentoring engineers.
Deep expertise in SQL and cloud data warehouses (preferably Snowflake) and cloud services (AWS/GCP/Azure).
Proven experience with dbt for data modeling and transformation.
Hands-on experience with Airflow (or similar orchestrators like Prefect, Luigi).
Strong understanding of dimensional modeling, ELT best practices, and data governance principles.
Ability to balance hands-on development with leadership and stakeholder management.
Clear communication skills: you can explain technical concepts to both technical and non-technical teams.
Nice to Have
Experience in the media, OTT, or sports tech domain.
Familiarity with BI tools like Looker or Power BI.
Exposure to testing frameworks like dbt tests or Great Expectations.
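Orchestrating dbt with Airflow, as this role describes, is often as simple as a two-task DAG. Below is a minimal hedged sketch; the project path, target name, and schedule are assumptions, and teams may instead use Cosmos or the dbt Cloud provider rather than plain BashOperator calls.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Placeholder project location; a real deployment would parameterize this.
DBT_DIR = "/opt/dbt/analytics_project"

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 6 * * *",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )

    dbt_run >> dbt_test
```

Running dbt test as a downstream task keeps bad data from silently reaching reporting models, which supports the data quality and observability goals listed above.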
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.
Overview of Role
As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.
Responsibilities
Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.
Qualifications
1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred).
SQL skills for complex data manipulation, querying, and optimization.
Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders.
66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.
Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the team
Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10+M queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate and access the data in the lake for both streaming data and batch. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.
About the Role
Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered "Yes" to these questions, this role is for you!
What you will be doing
You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
You will be making changes to the underlying systems and, if an opportunity arises, you can contribute your work back into the open source.
You will also be responsible for supporting internal customers and on-call services for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team.
We are excited if you have
7+ years of production experience building big data platforms based upon Spark, Trino or equivalent.
Strong programming expertise in Java, Scala, Kotlin or another JVM language.
A robust grasp of distributed systems concepts, algorithms, and data structures.
Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
Extensive hands-on experience with public cloud AWS or GCP.
BS/MS degree in CS or equivalent.
AI literacy / AI growth mindset.
Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.
The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.
We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.
By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
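The Roku posting above lists Spark and Kafka among its core technologies. As a hedged illustration only, here is a minimal PySpark Structured Streaming job that reads a Kafka topic and writes Parquet; it assumes the spark-sql-kafka connector is on the classpath, and the broker, topic, and paths are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Broker addresses, topic, and output paths are placeholders for illustration.
spark = SparkSession.builder.appName("playback_events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "playback-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value and keep the event time.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/streams/playback_events")
    .option("checkpointLocation", "/data/checkpoints/playback_events")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```

In a production lake the sink would more likely be Iceberg or Delta Lake with schema enforcement, but the read/parse/checkpoint structure stays the same.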
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Overview
About this role
We are looking for an innovative, hands-on technology leader to run Global Data Operations for one of the largest global FinTechs. This is a new role that will transform how we manage and process high-quality data at scale and reflects our commitment to invest in an Enterprise Data Platform to unlock our data strategy for BlackRock and our Aladdin Client Community. A technology-first mindset, to manage and run a modern global data operations function with high levels of automation and engineering, is essential. This role requires a deep understanding of data, domains, and the associated controls.
Key Responsibilities
The ideal candidate will be a high-energy, technology- and data-driven individual who has a track record of leading and doing the day-to-day operations.
Ensure on-time, high-quality data delivery with a single pane of glass for data pipeline observability and support.
Live and breathe best practices of DataOps, such as culture, processes and technology.
Partner cross-functionally to enhance existing data sets, eliminating manual inputs and ensuring high quality, and onboarding new data sets.
Lead change while ensuring daily operational excellence, quality, and control.
Build and maintain deep alignment with key internal partners on ops tooling and engineering.
Foster an agile, collaborative culture which is creative, open, supportive, and dynamic.
Knowledge And Experience
8+ years' experience in hands-on data operations, including data pipeline monitoring and engineering.
Technical expert, including experience with data processing, orchestration (Airflow), data ingestion, cloud-based databases/warehousing (Snowflake) and business intelligence tools.
The ability to operate and monitor large data sets through the data lifecycle, including the tooling and observability required to ensure data quality and control at scale.
Experience implementing, monitoring, and operating data pipelines that are fast, scalable, reliable, and accurate.
Understanding of modern-day data highways, the associated challenges, and effective controls.
Passionate about data platforms, data quality and everything data.
Practical and detail-oriented operations leader.
Inquisitive leader who will bring new ideas that challenge the status quo.
Ability to navigate a large, highly matrixed organization.
Strong presence with clients.
Bachelor's Degree in Computer Science, Engineering, Mathematics or Statistics.
Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
6.0 - 7.0 years
15 - 17 Lacs
India
On-site
About The Opportunity
This role is within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.
Role & Responsibilities
Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.
Skills & Qualifications
Must-Have
6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
Expert proficiency in PySpark, Python, and advanced SQL with a record of performance tuning distributed jobs.
Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and commitment to code quality.
Preferred
Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
Relevant certifications (Databricks, AWS, Azure) or active contributions to open source projects.
Location: India | Employment Type: Full-time
Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
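The performance-tuning bullet above (Spark jobs, configurations, resource management) can be illustrated with a short hedged sketch: broadcasting a small dimension to avoid a shuffle and repartitioning on the write key. The table paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

# Hypothetical tables: a large fact and a small dimension.
fact = spark.read.parquet("/lake/curated/fact_orders")
dim = spark.read.parquet("/lake/curated/dim_products")

# Broadcasting the small dimension avoids a full shuffle of the large fact table,
# one of the most common Spark join optimizations.
joined = fact.join(broadcast(dim), on="product_key", how="left")

# Repartition on the write key so output files are reasonably sized and
# downstream date-range queries can prune partitions.
(
    joined.repartition("order_date")
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/lake/marts/orders_enriched")
)
```

Whether a broadcast join actually helps depends on the dimension fitting in executor memory, so the choice is normally confirmed against the Spark UI and query plan rather than applied blindly.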
Posted 1 week ago
7.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.
About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.
Role & Responsibilities
Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage.
Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance.
Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling.
Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.
Skills & Qualifications
Must-Have
6–7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs.
Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse / data-lake patterns.
Strong problem-solving skills, DevOps mindset and commitment to code quality; comfortable mentoring fellow engineers.
Preferred
Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering or a related field.
Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks.
Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.
Benefits & Culture Highlights
Remote-first and flexible hours with 25+ PTO days and comprehensive health cover.
Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.
Skills: data modeling, big data technologies, team leadership, AWS, data, SQL, agile methodologies, performance tuning, ELT, Airflow, Apache Spark, PySpark, Hadoop, Databricks, Python, dbt, ETL, Azure
Posted 1 week ago
6.0 - 11.0 years
12 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Payroll Company: Compunnel INC
Client: Infosys (after 6 months you will work directly with Infosys)
Experience Required: 6+ years
Mode of Work: 5 days of work from the office
Location: Bangalore, Hyderabad, Trivandrum, Chennai, Pune, Chandigarh, Jaipur, Mangalore
Job Title: Python Developer
Primary skill: Python, with 6 years of experience.
Added and necessary (almost all very important): Airflow, Kubernetes, ELK, Flask / Django / FastAPI.
Gen AI experience will be a big plus.
Experience with DB-to-DB migration, especially MS SQL to any open-source DB (Spark / ClickHouse).
Experience moving code from Java to Python, or within Python from one framework to another.
Please fill in all the essential details given below, attach your updated resume, and send it to ralish.sharma@compunnel.com
1. Total Experience:
2. Relevant Experience in Python Development:
3. Experience in Airflow:
4. Experience in Kubernetes:
5. Experience in ELK:
6. Experience in Flask/Django/FastAPI:
7. Experience in Gen AI:
8. Current company:
9. Current Designation:
10. Highest Education:
11. Notice Period:
12. Current CTC:
13. Expected CTC:
14. Current Location:
15. Preferred Location:
16. Hometown:
17. Contact No:
18. If you have any offer from another company, please mention the offer amount and offer location:
19. Reason for looking for a change:
If the job description is suitable for you, please get in touch with me at 9910044363.
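Since Flask/Django/FastAPI is called out as near-essential for this role, here is a minimal hedged FastAPI sketch; the endpoints and in-memory store are purely illustrative and stand in for a real database layer.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-service")

# In-memory store stands in for a real database layer; all names are illustrative.
ITEMS = {}


class Item(BaseModel):
    name: str
    quantity: int = 0


@app.post("/items/{item_id}")
def upsert_item(item_id: int, item: Item):
    ITEMS[item_id] = {"name": item.name, "quantity": item.quantity}
    return {"id": item_id, **ITEMS[item_id]}


@app.get("/items/{item_id}")
def get_item(item_id: int):
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return {"id": item_id, **ITEMS[item_id]}
```

Assuming the file is saved as main.py, it can be run locally with "uvicorn main:app --reload" and exercised through the auto-generated docs at /docs.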
Posted 1 week ago
3.0 years
4 Lacs
Delhi
On-site
Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E. / B.Tech / MCA / MSc (IT or CS) / MS
Salary: Up to ₹80K (the rest depends on the interview and experience)
Notice Period: Immediate joiners to those with up to 20 days' notice
Candidates from Delhi/NCR only will be preferred.
Job Summary:
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.
Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.
Required Skills & Experience:
3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.
Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.
If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!
Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
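As a hedged example of the Hive/Spark work this role describes, the sketch below reads raw events from HDFS and writes a partitioned Hive table. It assumes a cluster with a configured Hive metastore; the paths, columns, and table names are placeholders only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support assumes the job runs on a cluster with a configured metastore.
spark = (
    SparkSession.builder.appName("clickstream_daily_agg")
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw events from HDFS (landed earlier by NiFi/Sqoop/Kafka ingestion).
raw = spark.read.json("hdfs:///data/raw/clickstream/2025-07-01/")

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "page")
       .agg(F.count("*").alias("views"), F.countDistinct("user_id").alias("users"))
)

# Persist as a partitioned Hive table for downstream Hive/Spark SQL consumers.
(
    daily.write.mode("overwrite")
    .partitionBy("event_date")
    .format("parquet")
    .saveAsTable("analytics.clickstream_daily")
)
```

A job like this is typically wrapped in an Airflow task so the daily partition is rebuilt on schedule and failures are retried automatically.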
Posted 1 week ago
3.0 years
3 - 6 Lacs
Chennai
On-site
ROLE SUMMARY
At Pfizer we make medicines and vaccines that change patients' lives, with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
Manufacturing throughput efficiency and increased manufacturing yield
Reduction of end-to-end cycle time and increase of percent release attainment
Increased quality control lab throughput and more timely closure of quality assurance investigations
Increased manufacturing yield of vaccines
More cost-effective network planning decisions and lowered inventory costs
In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you’ll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!
ROLE RESPONSIBILITIES
The Senior Associate, Integration Engineer’s responsibilities include, but are not limited to:
Maintain Database Service Catalogues
Build, maintain and optimize data pipelines (an illustrative Kafka-to-PostgreSQL sketch follows this posting)
Support cross-functional teams with data related tasks
Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
Automate builds and deployments of database environments
Support development teams in database related troubleshooting and optimization
Document technical specifications, data flows, system architectures and installation instructions for the provided services
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Participate in relevant SAFe ceremonies and meetings
BASIC QUALIFICATIONS
Education: Bachelor’s degree or Master’s degree in Computer Science, Data Engineering, Data Science, or related discipline
Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
PostgreSQL (or similar SQL databases)
Neo4j/Cypher
ETL (Extract, Transform, and Load) processes
Airflow or other data pipeline technology
Kafka or another distributed event streaming platform
Proficient or better in a scripting language, ideally Python
Experience tuning and optimizing database performance
Knowledge of modern data integration patterns
Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones
Proactive approach and goal-oriented mindset
Self-driven approach to research and problem solving with proven analytical skills
Ability to manage tasks across multiple projects at the same time
PREFERRED QUALIFICATIONS
Pharmaceutical experience
Experience working with Agile delivery methodologies (e.g., Scrum)
Experience with graph databases
Experience with Snowflake
Familiarity with cloud platforms such as AWS
Experience with containerization technologies such as Docker and orchestration tools like Kubernetes
PHYSICAL/MENTAL REQUIREMENTS
None
NON-STANDARD WORK SCHEDULE, TRAVEL OR ENVIRONMENT REQUIREMENTS
The job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases, as well as unplanned/on-call level 3 support. Travel requirements are project based; the estimated percentage of travel to support project and departmental activities is less than 10%.
Work Location Assignment: Hybrid
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech #LI-PFE
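For illustration, a minimal sketch of a Kafka-to-PostgreSQL ingestion step of the kind this role supports, using the kafka-python and psycopg2 packages; the topic, table, and connection details are placeholders, not Pfizer systems.

```python
import json

import psycopg2
from kafka import KafkaConsumer

# Consume JSON events from a (hypothetical) topic and load them into PostgreSQL.
consumer = KafkaConsumer(
    "release-events",                      # placeholder topic name
    bootstrap_servers="kafka:9092",        # placeholder broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
conn = psycopg2.connect("dbname=mfg user=etl password=secret host=postgres")  # placeholder DSN
cur = conn.cursor()

for msg in consumer:
    event = msg.value
    cur.execute(
        "INSERT INTO release_events (batch_id, status, recorded_at) VALUES (%s, %s, %s)",
        (event["batch_id"], event["status"], event["recorded_at"]),
    )
    conn.commit()  # committed per message for simplicity; a real pipeline would batch commits
```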
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About SAIGroup
SAIGroup is a private investment firm that has committed $1 billion to incubate and scale revolutionary AI-powered enterprise software application companies. Our portfolio, a testament to our success, comprises rapidly growing AI companies that collectively cater to over 2,000 major global customers, approaching $800 million in annual revenue, and employing a global workforce of over 4,000 individuals. SAIGroup invests in new ventures based on breakthrough AI-based products that have the potential to disrupt existing enterprise software markets. SAIGroup’s latest investment, JazzX AI, is a pioneering technology company on a mission to shape the future of work through an AGI platform purpose-built for the enterprise. JazzX AI is not just building another AI tool: it is reimagining business processes from the ground up, enabling seamless collaboration between humans and intelligent systems. The result is a dramatic leap in productivity, efficiency, and decision velocity, empowering enterprises to become pacesetters who lead their industries and set new benchmarks for innovation and excellence.
Job Title: AGI Solutions Engineer (Junior) – GTM Solution Delivery (full-time, remote-first, with periodic travel to client sites & JazzX hubs)
Role Overview
As an Artificial General Intelligence Engineer you are the hands-on technical force that turns JazzX’s AGI platform into working, measurable solutions for customers. You will:
Build and integrate LLM-driven features, vector search pipelines, and tool-calling agents into client environments (a dependency-light RAG sketch follows this posting).
Collaborate with solution architects, product, and customer-success teams from discovery through production rollout.
Contribute field learnings back to the core platform, accelerating time-to-value across all deployments.
You are as comfortable writing production-quality Python as you are debugging Helm charts, and you enjoy explaining your design decisions to both peers and client engineers.
Key Responsibilities
Solution Implementation: Develop and extend JazzX AGI services (LLM orchestration, retrieval-augmented generation, agents) within customer stacks. Integrate data sources, APIs, and auth controls; ensure solutions meet security and compliance requirements. Pair with Solution Architects on design reviews; own component-level decisions.
Delivery Lifecycle: Drive proofs-of-concept, pilots, and production rollouts with an agile, test-driven mindset. Create reusable deployment scripts (Terraform, Helm, CI/CD) and operational runbooks. Instrument services for observability (tracing, logging, metrics) and participate in on-call rotations.
Collaboration & Support: Work closely with product and research teams to validate new LLM techniques in real-world workloads. Troubleshoot customer issues, triage bugs, and deliver patches or performance optimisations. Share best practices through code reviews, internal demos, and technical workshops.
Innovation & Continuous Learning: Evaluate emerging frameworks (e.g., LlamaIndex, AutoGen, WASM inferencing) and pilot promising tools. Contribute to internal knowledge bases and GitHub templates that speed future projects.
Qualifications
Must-Have
2+ years of professional software engineering experience; 1+ years working with ML or data-intensive systems.
Proficiency in Python (or Java/Go) with strong software-engineering fundamentals (testing, code reviews, CI/CD).
Hands-on experience deploying containerised services on AWS, GCP, or Azure using Kubernetes & Helm.
Practical knowledge of LLM / Gen-AI frameworks (LangChain, LlamaIndex, PyTorch, or TensorFlow) and vector databases.
Familiarity integrating REST/GraphQL APIs, streaming platforms (Kafka), and SQL/NoSQL stores.
Clear written and verbal communication skills; ability to collaborate with distributed teams.
Willingness to travel 10–20% for key customer engagements.
Nice-to-Have
Experience delivering RAG or agent-based AI solutions in regulated domains (finance, healthcare, telecom).
Cloud or Kubernetes certifications (AWS SA-Assoc/Pro, CKA, CKAD).
Exposure to MLOps stacks (Kubeflow, MLflow, Vertex AI) or data-engineering tooling (Airflow, dbt).
Attributes
Empathy & Ownership: You listen carefully to user needs and take full ownership of delivering great experiences.
Startup Mentality: You move fast, learn quickly, and are comfortable wearing many hats.
Detail-Oriented Builder: You care about the little things.
Mission-Driven: You want to solve important, high-impact problems that matter to real people.
Team-Oriented: Low ego, collaborative, and excited to build alongside highly capable engineers, designers, and domain experts.
Travel
This position requires the ability to travel to client sites as needed for on-site deployments and collaboration. Travel is estimated at approximately 20–30% of the time (varying by project), and flexibility is expected to accommodate key client engagement activities.
Why Join Us
At JazzX AI, you have the opportunity to join the foundational team that is pushing the boundaries of what’s possible to create an autonomous, intelligence-driven future. We encourage our team to pursue bold ideas, foster continuous learning, and embrace the challenges and rewards that come with building something truly innovative. Your work will directly contribute to pioneering solutions that have the potential to transform industries and redefine how we interact with technology. As an early member of our team, your voice will be pivotal in steering the direction of our projects and culture, offering an unparalleled chance to leave your mark on the future of AI. We offer a competitive salary, equity options, and an attractive benefits package, including health, dental, and vision insurance, flexible working arrangements, and more.
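For illustration, a dependency-light sketch of the retrieval-augmented generation pattern the role describes. The embed() and call_llm() functions are hypothetical stand-ins for whichever embedding model and LLM endpoint a deployment actually uses; nothing here is JazzX's own API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical hook: plug in the deployment's embedding model here.
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    # Hypothetical hook: plug in the deployment's LLM endpoint here.
    raise NotImplementedError

def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3) -> list[str]:
    """Return the k documents whose vectors are most similar to the query vector."""
    q = embed(query)
    # Cosine similarity between the query and every document vector.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

def answer(query: str, docs: list[str], doc_vecs: np.ndarray) -> str:
    """Classic RAG: retrieve supporting context, then ground the LLM answer in it."""
    context = "\n---\n".join(retrieve(query, docs, doc_vecs))
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```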
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Want to be on a team that is full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Engineering team does just that. Our engineering is where high-quality professional engineering meets individual impact. Our team creates products that are built on a mature, cloud-native, event-driven microservices architecture hosted in AWS.
SailPoint is seeking a Backend Software Engineer to help build a new cloud-based SaaS identity analytics product. We are looking for well-rounded backend or full stack engineers who are passionate about building and delivering reliable, scalable microservices and infrastructure for SaaS products. As one of the first members on the team, you will be integral in building this product and will be part of an agile team that is in startup mode. This is a unique opportunity to build something from scratch but have the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.
Responsibilities
Deliver efficient, maintainable data pipelines
Deliver robust, bug-free code for Java-based microservices
Build and maintain Data Analytics and Machine Learning features
Produce designs and rough estimates, and implement features based on product requirements
Collaborate with peers on designs, code reviews, and testing
Produce unit and end-to-end tests to improve code quality and maximize code coverage for new and existing features
Responsible for on-call production support
Requirements
4+ years of professional software development experience
Strong Python, SQL, and Java experience
Great communication skills
BS in Computer Science, or a related field
Comprehensive experience with object-oriented analysis and design skills
Experience with workflow engines
Experience with Continuous Delivery and source control
Experience with observability platforms for performance metrics collection and monitoring
Preferred
Strong experience in Airflow, Snowflake, DBT (an illustrative Snowflake query sketch follows this posting)
Experience with ML pipelines (SageMaker)
Experience with Continuous Delivery
Experience working on a Big Data/Machine Learning product
Compensation and benefits
Experience a small-company atmosphere with big-company benefits.
Recharge your batteries with a flexible vacation policy and paid holidays.
Grow with us with both technical and career growth opportunities.
Enjoy a healthy work-life balance with flexible hours, family-friendly company events and charitable work.
SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
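For illustration, a minimal sketch of a Snowflake-backed query such a pipeline task might run, using the snowflake-connector-python package; the account, credentials, warehouse, and table names are placeholders, not SailPoint's actual schema.

```python
import snowflake.connector

def daily_identity_counts(run_date: str) -> list[tuple]:
    """Return per-tenant event counts for one day from a (hypothetical) events table."""
    conn = snowflake.connector.connect(
        account="acme-xy12345",      # placeholder account identifier
        user="pipeline_svc",         # placeholder service user
        password="secret",
        warehouse="ANALYTICS_WH",
        database="IDENTITY",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT tenant_id, COUNT(*) FROM identity_events "
            "WHERE event_date = %s GROUP BY tenant_id",
            (run_date,),             # bound parameter keeps the query injection-safe
        )
        return cur.fetchall()
    finally:
        conn.close()
```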
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This job is with Pfizer, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.
Role Summary
At Pfizer we make medicines and vaccines that change patients' lives, with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission.
About
Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
Manufacturing throughput efficiency and increased manufacturing yield
Reduction of end-to-end cycle time and increase of percent release attainment
Increased quality control lab throughput and more timely closure of quality assurance investigations
Increased manufacturing yield of vaccines
More cost-effective network planning decisions and lowered inventory costs
In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you'll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!
Role Responsibilities
The Senior Associate, Integration Engineer's responsibilities include, but are not limited to:
Maintain Database Service Catalogues
Build, maintain and optimize data pipelines
Support cross-functional teams with data related tasks
Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
Automate builds and deployments of database environments
Support development teams in database related troubleshooting and optimization
Document technical specifications, data flows, system architectures and installation instructions for the provided services
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Participate in relevant SAFe ceremonies and meetings
Basic Qualifications
Education: Bachelor's degree or Master's degree in Computer Science, Data Engineering, Data Science, or related discipline
Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
PostgreSQL (or similar SQL databases)
Neo4j/Cypher
ETL (Extract, Transform, and Load) processes
Airflow or other data pipeline technology
Kafka or another distributed event streaming platform
Proficient or better in a scripting language, ideally Python
Experience tuning and optimizing database performance
Knowledge of modern data integration patterns
Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones
Proactive approach and goal-oriented mindset
Self-driven approach to research and problem solving with proven analytical skills
Ability to manage tasks across multiple projects at the same time
Preferred Qualifications
Pharmaceutical experience
Experience working with Agile delivery methodologies (e.g., Scrum)
Experience with graph databases
Experience with Snowflake
Familiarity with cloud platforms such as AWS
Experience with containerization technologies such as Docker and orchestration tools like Kubernetes
Physical/Mental Requirements
None
Non-standard Work Schedule, Travel or Environment Requirements
The job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases, as well as unplanned/on-call level 3 support. Travel requirements are project based; the estimated percentage of travel to support project and departmental activities is less than 10%.
Work Location Assignment: Hybrid
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
We are looking for a Lead Data Engineer with at least 7 years of experience who is proficient in Python, PySpark, Airflow (batch jobs), HPCC, and ECL. The role involves driving complex data solutions across various teams. Practical knowledge of data modeling and test-driven development, along with familiarity with Agile/Waterfall methodologies, is essential. Your responsibilities will include leading projects, working collaboratively with different teams, and transforming business requirements into scalable data solutions following industry best practices in managed services or staff augmentation environments.
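For illustration, a minimal PySpark batch transformation of the kind a lead on this stack would design and review; the input/output paths and column names are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

def build_daily_summary(input_path: str, output_path: str) -> None:
    """Read raw events, aggregate per account per day, write partitioned Parquet."""
    spark = SparkSession.builder.appName("daily_summary").getOrCreate()

    events = spark.read.parquet(input_path)
    summary = (
        events
        .withColumn("event_date", F.to_date("event_ts"))          # derive the partition key
        .groupBy("event_date", "account_id")
        .agg(
            F.count("*").alias("event_count"),
            F.sum("amount").alias("total_amount"),
        )
    )
    summary.write.mode("overwrite").partitionBy("event_date").parquet(output_path)

if __name__ == "__main__":
    # Placeholder bucket names for the example.
    build_daily_summary("s3://raw-bucket/events/", "s3://curated-bucket/daily_summary/")
```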
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 84960
Date: Jul 27, 2025
Location: Delhi
Designation: Senior Consultant
Entity: Deloitte Touche Tohmatsu India LLP
Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.
The team
As a member of the Operation, Industry and Domain Solutions team you will embark on an exciting and fulfilling journey with a group of intelligent, innovative and globally aware individuals. We work in conjunction with various institutions solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change.
Your work profile
DevOps Engineer
Qualifications: B.E./B.Tech./MCA/M.E./M.Tech
Required Experience: 10 years or more
Desirable: Experience in Govt. IT Projects / Govt. Health IT Projects
Rich experience in analyzing enterprise application performance, determining root cause, and optimizing resources up and down the stack
Scaling application workloads in Linux/VMware
Demonstrates technical qualification in:
Administering and utilizing Jenkins / GitLab CI at scale for build management and continuous integration
Very strong in Kubernetes, Envoy, Consul, service mesh, and API gateways
Substantial knowledge of monitoring tools like Zipkin, Kibana, Grafana, Prometheus, and SonarQube (a minimal Prometheus query sketch follows this posting)
Strong CI/CD experience
Relevant experience in any cloud platform
Creating Docker images and managing Docker containers
Scripting for configuration management
Experience in Airflow, ELK, and Dataflow for ETL
Good to have: infrastructure-as-code, secrets management, deployment strategies, cloud networking
Familiarity with primitives like Deployment and CronJob
Scripting experience
Supporting highly available open-source production applications and tools
How you'll grow
Connect for impact
Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead
You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all
At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.
Drive your career
At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome… entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.
Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organisation and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
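For illustration, a minimal monitoring script of the kind this profile covers, querying the Prometheus HTTP API for pod restart counts. The in-cluster Prometheus address is a placeholder, and the query assumes kube-state-metrics is installed; neither detail comes from the posting.

```python
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # placeholder in-cluster address

def restart_counts(namespace: str) -> dict[str, float]:
    """Return container restart totals per pod in a namespace via the Prometheus query API."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={
            "query": f'sum by (pod) (kube_pod_container_status_restarts_total{{namespace="{namespace}"}})'
        },
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Each result carries a [timestamp, value] pair; keep the value per pod.
    return {r["metric"]["pod"]: float(r["value"][1]) for r in results}

if __name__ == "__main__":
    for pod, restarts in sorted(restart_counts("prod").items(), key=lambda kv: -kv[1]):
        if restarts > 0:
            print(f"{pod}: {restarts:.0f} restarts")
```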
Posted 1 week ago