2873 Airflow Jobs - Page 43

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Experience: 5 to 8 years. Location: Pune and Hyderabad. Notice Period: Immediate to 30 days.

Python and Java Development: Proficiency in both languages is crucial for building Airflow workflows. Apache Airflow: Experience in using Airflow for workflow automation, including designing, implementing, and maintaining DAGs, managing task dependencies, and configuring scheduling and error handling. An understanding of database systems, data warehousing concepts, and data architecture is essential for building robust data pipelines.
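For illustration, here is a minimal sketch of the DAG work this posting describes: explicit task dependencies, a schedule, and retry-based error handling. The DAG name, tasks, and schedule are hypothetical; it assumes Airflow 2.4+ with the standard PythonOperator.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling source data")


def transform():
    print("applying business rules")


def load():
    print("writing to the warehouse")


# retries/retry_delay give per-task error handling: Airflow re-runs a
# failed task before marking the whole DAG run as failed.
default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="example_etl",              # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # cron expressions also work here
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # task dependencies
```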

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

interface.ai is the industry-leading specialized AI provider for banks and credit unions, serving over 100 financial institutions. The company's integrated AI platform offers a unified banking experience through voice, chat, and employee-assisting solutions, enhanced by cutting-edge proprietary Generative AI. Our mission is clear: to transform the banking experience so every consumer enjoys hyper-personalized, secure, and seamless interactions, while improving operational efficiencies and driving revenue growth. interface.ai offers pre-trained, domain-specific AI solutions that are easy to integrate, scale, and manage, both in-branch and online. Combining this with deep industry expertise, interface.ai is the AI solution for banks and credit unions that want to deliver exceptional experiences and stay at the forefront of AI innovation.

About interface.ai: interface.ai is the most advanced AI platform for financial institutions. We serve over 100 credit unions and community banks, enabling millions of intelligent conversations every day through voice, chat, and internal copilots. As a fast-growing, AI-native company, data is at the heart of how we build, measure, and scale our products. From intelligent conversation design to customer automation analytics, we apply machine learning and statistical modeling to deliver real-time, measurable outcomes.

About the Role: We are seeking a Senior Data Scientist to lead the development of scalable, production-grade models and analytics systems that power the core platform our products run on. This is a high-impact role where you'll work on problems at the intersection of language understanding, user behavior prediction, decision optimization, and platform-level intelligence. You will be embedded in product-driven teams, while also collaborating with infrastructure and research to shape the future of intelligence at interface.ai.
Key Responsibilities: Develop and deploy machine learning models for use cases like intent recognition, conversation scoring, outcome prediction, and next-best-action systems. Design and run A/B and multivariate experiments to validate hypotheses and measure product impact. Build real-time and batch inference pipelines in collaboration with engineering. Define, instrument, and maintain data pipelines for user interaction modeling, longitudinal engagement, and behavioral segmentation. Develop intelligence layers for customer-facing analytics products (e.g., AI explainability, task attribution, feature impact modeling). Partner with product managers, engineers, and UX teams to define data-informed product features. Translate complex model outcomes into actionable insights for internal and external stakeholders. Stay current with research and best practices in voice models, decision modeling, time-series analysis, and agentic AI architectures.

What Success Looks Like: Within your first 6–12 months, you will launch production-grade models that are actively used in product features or operations workflows; define and validate key behavioral or predictive models that influence roadmap direction; improve accuracy, performance, or interpretability of existing AI systems across voice and chat products; drive measurable lift in engagement, resolution rate, or automation through data-driven product iterations; and collaborate across departments to establish trusted experimentation and measurement frameworks.

What You Bring (Required): 5+ years of experience in applied data science, including end-to-end model development and deployment. Strong knowledge of Python, R, and SQL, and experience with ML libraries and deep learning frameworks. Experience with statistical testing, experiment design, and causal inference. Understanding of production ML pipelines and collaboration with data engineering teams. Experience with speech models, conversational systems, or classification models in user-facing applications. Strong product thinking: able to translate model insights into product impact and roadmap trade-offs.

Preferred: Experience working in B2C environments, especially in regulated industries (e.g., financial services, healthcare). Exposure to retrieval-augmented generation (RAG), embedding-based search, or LLM evaluation frameworks. Familiarity with tools like Airflow, MLflow, dbt, or feature stores. Prior work in chatbots, IVRs, or user feedback systems.

Why Join Us: Data science is central to our product innovation strategy. You'll have a direct, measurable impact on customer outcomes and platform intelligence. You'll work on real-world AI applications with scaled deployment and product visibility. You'll collaborate with a cross-disciplinary team of engineers, designers, and product leaders moving at startup speed.

At interface.ai, we are committed to providing an inclusive and welcoming environment for all employees and applicants. We celebrate diversity and believe it is critical to our success as a company. We do not discriminate on the basis of race, color, religion, national origin, age, sex, gender identity, gender expression, sexual orientation, marital status, veteran status, disability status, or any other legally protected status. All employment decisions at interface.ai are based on business needs, job requirements, and individual qualifications. We strive to create a culture that values and respects each person's unique perspective and contributions.
We encourage all qualified individuals to apply for employment opportunities with interface.ai and are committed to ensuring that our hiring process is inclusive and accessible.
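As a hedged illustration of the experiment-design work this role emphasizes, the sketch below compares conversion rates between two variants with a two-proportion z-test. The counts are invented, and it assumes the statsmodels package is available.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B results: conversions and sample sizes per variant.
conversions = np.array([412, 471])
samples = np.array([10_000, 10_000])

# Two-sided test of H0: both variants convert at the same rate.
z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the variants likely differ.")
else:
    print("No significant difference at the 5% level.")
```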

Posted 1 week ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About Us: We love going to work and think you should too. Our team is dedicated to trust, customer obsession, agility, and striving to be better every day. These values serve as the foundation of our culture, guiding our actions and driving us towards excellence. We foster a culture of performance and recognition, allowing us to transform growth as we enable our employees to do the best work of their careers. This position is located in Pune. You'll be working in a major tech center of Pune, India. Across the globe, our Centers of Energy serve as hubs where we accelerate productivity and collaboration, inspire creativity, and cultivate a culture of connection and celebration. Our teams coordinate their time in Centers of Energy to reflect how they work best. To learn more about life at LogicMonitor, check out our Careers Page.

What You'll Do: The Data Scientist, CX will be part of the Customer Experience (CX) organization and help us analyze the health of our business through data and operational excellence. This role will play a key part in the team's effort to proactively identify, develop, and drive longer-term strategies and initiatives that deliver retention, growth, and a superior experience for our customers. The successful candidate will have relentless curiosity and a passion for creating innovative analysis to identify opportunities. Moreover, they will excel at extracting insights from data and converting ideas to action, and will feel comfortable interacting with senior management on a regular basis. As an integral part of the CX organization, you will leverage data science and analytics techniques to evaluate the health of our business and achieve operational excellence, applying data modeling techniques, predictive analytics, and statistical methods to uncover actionable insights that drive retention, growth, and an exceptional customer experience. The ideal candidate will have a strong technical background, exceptional analytical capabilities, and the ability to translate complex data into meaningful strategies.

Here's a Closer Look at This Key Role: Define Metrics & Dashboards: Collaborate with cross-functional teams to establish metrics, dashboards, and data models that yield actionable insights and drive business improvements. Develop Predictive Models: Design models that analyze customer behavior trends, such as adoption, churn, and retention, to support strategic decision-making; implement and validate predictive and prescriptive models, and create and maintain statistical models, incorporating machine learning techniques in your projects. Customize Analytical Tools: Enhance analytical efforts by tailoring tools (e.g., DBT workflows, dashboards) to identify business opportunities and optimize processes; work in an Agile, collaborative environment, partnering with other teams, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Communicate Insights: Communicate with internal teams to understand and define business needs and the appropriate modeling techniques to provide analytical solutions; evaluate modeling results and communicate them to technical and non-technical audiences.
What You'll Need: 4-5 years of experience, including Proof of Concept (POC) development: building POCs to validate and showcase the feasibility and effectiveness of proposed solutions. Strong programming skills in Python, with experience in production-grade data science workflows. Hands-on experience with scikit-learn, Pandas, NumPy, Matplotlib, and Seaborn for modeling and exploratory data analysis. Strong understanding and practical experience with supervised learning techniques and unsupervised learning methods to segment customers or usage patterns. Experience with data visualization tools (e.g., Tableau, Power BI, Looker). Familiarity with SQL and cloud-based data platforms (e.g., Snowflake, dbt, Airflow) is a plus.

Click here to read our International Applicant Privacy Notice.

LogicMonitor is an Equal Opportunity Employer. At LogicMonitor, we believe that innovation thrives when every voice is heard and each individual is empowered to bring their unique perspective. We're committed to creating a workplace where diversity is celebrated, and all employees feel inspired and supported to contribute their best. For us, equal opportunity means fostering a truly inclusive culture where everyone has the chance to grow and succeed. We don't just open doors; we invite you to step through and be part of something bigger. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Our goal is to ensure an accessible and inclusive experience for every candidate. If you need a reasonable accommodation during the application or interview process under applicable local law, please submit a request via this Accommodation Request Form. Know your rights: workplace discrimination is illegal. Please click here to review LogicMonitor's U.S. Pay Transparency Nondiscrimination Provision.
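As a hedged sketch of the customer-segmentation work mentioned above, the snippet below clusters customers by usage features with scikit-learn's KMeans. The feature names, synthetic data, and cluster count are illustrative assumptions, not details from the posting.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical usage metrics per customer.
rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "logins_per_week": rng.poisson(5, 500),
    "dashboards_created": rng.poisson(2, 500),
    "alerts_configured": rng.poisson(8, 500),
})

# Standardize so no single metric dominates the distance calculation.
features = StandardScaler().fit_transform(customers)

# Partition into four hypothetical usage segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
customers["segment"] = kmeans.labels_

# Profile each segment by its average behavior.
print(customers.groupby("segment").mean().round(2))
```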

Posted 1 week ago

Apply

3.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site

Source: LinkedIn

Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Manager.

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities: Proficiency with Microsoft Excel, Access, PowerPoint, Qlik Sense and SQL required. Develop & Maintain Qlik Sense Solutions: Design, develop, and manage interactive dashboards, reports, and applications using Qlik Sense. Data Modeling & Governance: Build and maintain data models to ensure accuracy, consistency, and integrity in reporting. SQL Development: Write and troubleshoot complex SQL queries for data extraction, transformation, and analysis. Qlik Sense Administration: Manage Qlik Sense environments, ensuring optimal performance, security, and access control. Requirement Gathering: Work closely with business stakeholders to understand requirements and translate them into BI solutions. Automation & Reporting: Implement automated reporting solutions using NPrinting and alerting features to improve efficiency. Agile & Kanban Execution: Lead BI projects using Agile methodologies, ensuring timely delivery and iterative improvements. Training & Mentorship: Conduct user training sessions, support business teams in utilizing Qlik Sense effectively, and mentor junior analysts. Collaboration with Leadership: Engage with technical and business leaders to refine BI solutions and enhance data-driven decision-making.

Requirements: 3-6 years of experience in Qlik Sense development and administration. Expertise in Qlik Sense with a strong understanding of data visualization and BI best practices. Strong SQL skills for query development and troubleshooting. Deep understanding of data modeling, data governance, and data warehousing concepts. Experience working in Agile environments (Kanban preferred). Ability to gather business requirements and translate them into actionable BI solutions. Excellent problem-solving and analytical skills with an innovative mindset. Strong communication skills to collaborate with business and technical teams effectively.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 3-6 years of experience in Qlik Sense development and data visualization, preferably within the manufacturing sector. Strong proficiency in data modeling, scripting, and data integration within Qlik Sense. Experience with SQL and relational databases, particularly those related to manufacturing data. Solid understanding of data warehousing concepts and business intelligence tools. Excellent analytical and problem-solving skills, with the ability to translate procurement data into insights. Strong communication and interpersonal skills to work effectively with stakeholders in production, operations, and supply chain. Ability to manage multiple projects and deliver results within deadlines.

Mandatory skill sets ('must have' knowledge, skills and experiences): MS Excel, Qlik Sense, SQL. Preferred skill sets ('good to have' knowledge, skills and experiences): Statistical analysis, SAP Analytics. Years of experience required: 6 to 9 years relevant experience. Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above). Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering, Master of Business Administration. Required Skills: Structured Query Language (SQL). Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role: Backend Engineer (5+ years). Duration: Full time (Hybrid). Location: Hyderabad.

About the Role: Senior Backend Engineer with strong Kafka expertise and a proven track record in Java, Flink, and Python. You will build scalable, high-performance backend services, optimize real-time data pipelines, and work with AWS cloud infrastructure.

Key Responsibilities: Develop and maintain backend services using Java, Flink, Python, and Kafka. Build real-time streaming pipelines and event-driven architectures. Work with AWS services (PostgreSQL, Aurora, DynamoDB). Automate workflows with Airflow; monitor with New Relic & Splunk. Deploy and manage applications using Kubernetes & Docker. Optimize performance and troubleshoot distributed systems.

Must-Have Qualifications: Highly skilled in backend development with strong Kafka expertise. Proficiency in Java, Flink, Python, and AWS cloud services. Experience with event-driven architectures and microservices. Experience with Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation. Hands-on experience with Airflow, New Relic, Splunk, Kubernetes, and Docker. Strong problem-solving and communication skills and a DevOps mindset. Strong understanding of development operations, networking, security, and automation. Ability to work in a fast-paced, collaborative environment. Strong expertise in Linux and Windows administration. Hands-on experience with cloud platforms (AWS, Azure, GCP). Proficiency in scripting and automation (Python, Bash, PowerShell, Terraform, Ansible, etc.). Experience with security tools (Nessus, Qualys, etc.) and vulnerability remediation. Familiarity with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD). Knowledge of networking, firewalls, VPNs, and DNS management. Experience with log management, monitoring, and alerting systems. Strong troubleshooting skills.

Tech Stack – Programming Languages: Java, Flink, Python. Cloud & Databases: AWS (PostgreSQL, DynamoDB). Streaming & Messaging: Kafka. Infrastructure-as-Code: Terraform. Orchestration & Monitoring: Airflow, New Relic, Splunk. Containerization & Deployment: Kubernetes, Docker.

About Singlepoint Solutions: Founded in 2011 with a global presence, Single Point Solutions, a Digital Transformation and Technology provider, specializes in Data, Mobile/Web, Cloud, AI, ML, and IoT technologies. Leveraging diverse industry knowledge and a collaborative approach, we deliver high-performing technology solutions. Our commitment to innovation empowers clients to achieve business goals and stay competitive in the digital landscape. Our Build, Operate, and Transfer (BOT) delivery model expertise caters to diverse industries. For more information, please visit https://www.singlepointsol.com. Single Point Solutions (SPS) is a Digital Transformation and Technology provider, with advanced technology solutions, including Mobile and Cloud App development, Artificial Intelligence & Machine Learning, Data, and Cloud infrastructure.
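As a hedged sketch of the event-driven work at the center of this role, the snippet below publishes and consumes JSON events with the kafka-python client. The broker address, topic, and consumer group are hypothetical.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # hypothetical broker
TOPIC = "payment-events"    # hypothetical topic

# Producer: serialize dict payloads to JSON bytes before sending.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"event": "payment_settled", "amount": 125.50})
producer.flush()

# Consumer: read the topic from the beginning in a named group.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="settlement-service",      # hypothetical group
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # process each event as it arrives
```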

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Senior Big Data SME (Subject Matter Expert). Location: Remote. Budget: up to 28-30 LPA. Work Hours: UK Time (1:30 PM – 10:30 PM IST). Industry: Technology / IT. Experience: 8 to 12 years in Data Engineering, Big Data, or related roles.

About the Role: We are hiring a Senior Big Data Subject Matter Expert (SME) to support and guide ongoing cloud data initiatives, with a focus on mentorship, project support, and hands-on training in modern Big Data tools and technologies. This role is ideal for someone with deep technical experience who enjoys coaching teams, troubleshooting data platform issues, and enabling engineers to grow in real-world cloud projects. You'll collaborate with engineers, architects, and leadership to ensure best practices in cloud data solutions and smooth delivery across projects.

Key Responsibilities: Provide technical support and guidance across Big Data platforms in Azure, AWS, or GCP. Train and mentor engineers on Big Data tools (Spark, Kafka, Hadoop, etc.). Assist project teams with architecture design, deployment, and debugging of data pipelines. Collaborate with cross-functional teams to ensure operational excellence and platform stability. Review and improve existing cloud data pipelines, focusing on performance, cost-efficiency, and scalability. Conduct regular knowledge-sharing sessions, workshops, and best practice walkthroughs. Help define and implement data governance, access control, and security frameworks.

Technical Skills Required: Cloud Platforms: Azure, AWS, GCP (at least 2 preferred). Big Data Tools: Apache Spark, Kafka, Hadoop, Flink. ETL Tools: DBT, Apache Airflow, AWS Glue. Data Warehousing: Snowflake, BigQuery, Redshift, Synapse. Containerization & Orchestration: Docker, Kubernetes (AKS, EKS, GKE). CI/CD & IaC: Terraform, GitHub Actions, Azure DevOps. Security & Governance: IAM, RBAC, data encryption, lineage tracking. Programming/Scripting: Python, Bash, PowerShell.

Preferred (Nice-to-Have): Exposure to Machine Learning pipelines and MLOps. Experience with serverless computing (AWS Lambda, Azure Functions). Understanding of multi-cloud or hybrid-cloud architectures.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Hiring for AWS Data Engineer with FastAPI. Immediate joiners. Pune and Chennai locations. 7-10 years experience. Share profiles to neha.sandurea@godoublu.com

We are seeking a skilled and motivated AWS Data Engineer with expertise in FastAPI, Pub/Sub messaging systems, and Apache Airflow to build and maintain scalable, cloud-native applications on AWS. The ideal candidate has strong experience in modern Python development, along with strong hands-on experience with event-driven architectures and data workflow orchestration in AWS cloud environments.

Required Qualifications: Bachelor's degree in computer science, data science, or a related technical discipline. 7+ years of hands-on experience in data engineering, including developing ETL/ELT data pipelines, API integration (FastAPI preferred), data platforms/products and/or data warehouses. 3+ years of hands-on experience in developing data-intensive solutions on AWS for operational and analytics workloads. 3+ years of experience in designing both ETL/ELT for batch processing and data streaming architectures for real-time or near real-time data ingestion and processing. 3+ years of experience in developing and orchestrating complex data workflows using Apache Airflow (mandatory), including DAG authoring, scheduling, and monitoring. 2+ years of experience in building and managing event-driven microservices using pub/sub systems (e.g., AWS SNS/SQS, Kafka). 3+ years of hands-on experience in two or more database technologies (e.g., MySQL, PostgreSQL, MongoDB) and data warehouses (e.g., Redshift, BigQuery, or Snowflake), as well as cloud-based data engineering technologies. Proficient in dashboard/BI and data visualization tools (e.g., Tableau, QuickSight). Able to develop conceptual, logical, and physical data models using ERDs. Thrives in dynamic, cross-functional team environments. Possesses a team-first mindset, valuing diverse perspectives and contributing to a collaborative work culture. Approaches challenges with a positive and can-do attitude. Willing to challenge the status quo, demonstrating the ability to understand when and how to take appropriate risks to drive performance. A passionate problem solver. High learning agility.
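As a hedged sketch of the FastAPI-plus-pub/sub pattern this posting combines, the endpoint below accepts an event and publishes it to an SQS queue via boto3. The queue URL, region, and payload model are assumptions, and it assumes Pydantic v2.

```python
import json

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sqs = boto3.client("sqs", region_name="us-east-1")  # hypothetical region
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-events"  # hypothetical


class Event(BaseModel):
    source: str
    payload: dict


@app.post("/events")
def publish_event(event: Event):
    # Push the event onto the queue; downstream consumers
    # (e.g., an Airflow-triggered job) process it asynchronously.
    response = sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event.model_dump()),
    )
    return {"message_id": response["MessageId"]}
```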

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About The Role: Duration: 6 months. Location: Pune (Hybrid). Timings: Full Time (as per company timings). Notice Period: Immediate joiners only. Experience: 5-8 Years.

Technical Skills & Requirements: Data-Oriented Solutions: Proven experience in designing, building, and operating data-oriented solutions in high-volume, transactional, global industries. Experience with AdTech is highly desirable. System Design & Problem Solving: Experience developing simple, scalable, and reliable architectures, operating concurrent and distributed systems, and solving complex or novel problems. Programming & Tools Expertise: Strong proficiency in languages: Python, JavaScript/TypeScript, Node.js. Frameworks/Tools: Airflow/Composer. Data Platforms: Kafka, Snowflake, BigQuery, Spark, Hadoop, AWS Athena, PostgreSQL, Redis. Cloud Platforms: AWS and GCP. Containerization: Docker and Kubernetes (preferred). SQL: Excellent development, query optimization, and data pipeline skills. Algorithms & ML/AI: Proven experience with data structures and algorithms. Exposure to ML/AI solutions is highly desirable. Software Development Practices: Experience with modern development and testing practices, including TDD, BDD, or ATDD; Agile methodologies; DevSecOps and Site Reliability Engineering (SRE); Continuous Integration / Continuous Delivery (CI/CD); Trunk-Based Development; and XP practices. SaaS Product Development: Experience in SaaS product engineering and operations is a strong plus. Soft Skills & Communication: Strong written and spoken English. Excellent communication, influencing, and documentation skills. Resilience and the ability to thrive in ambiguous situations. Passion for continuous learning and professional development.

Roles & Responsibilities: Work as a member of an engineering team, collaborating with tech leads, product managers, designers, and data scientists. Design, build, and maintain simple, scalable, reliable, and secure solutions. Develop and deliver new features, maintain existing products, and help drive growth to achieve team KPIs. Use and advocate for modern engineering practices: TDD/BDD/ATDD, XP, QA Engineering, Trunk-Based Development, DevSecOps, CI/CD, and SRE. Contribute to the continuous improvement of engineering principles, tools, and practices. Mentor and support junior engineers, fostering a culture of continuous learning. Stay informed on AdTech industry trends, standards, competitor platforms, and commercial models. Combine technical expertise with market insights to influence strategy, product design, and roadmap planning.

Skills: BigQuery, design, JavaScript, Agile methodologies, SQL, ML, XP practices, TDD, Python, Airflow, Kubernetes, Redis, Kafka, PostgreSQL, AWS, Docker, Site Reliability Engineering, Continuous Integration, Continuous Delivery, Spark, data, GCP, Composer, BDD, Trunk-Based Development, DevSecOps, Node.js, ATDD, data structures, SaaS product development, AWS Athena, AI, TypeScript, algorithms, Snowflake, Hadoop, platforms.

Posted 1 week ago

Apply

12.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: AWS Data Architect. Skills: Python, Airflow, AWS services (S3), Spark (Glue, EMR), messaging (Kafka, SQS, EventBridge), integration (AppFlow, APIs), and DW concepts and data modeling experience. Experience Required: 12 to 15 years. Job Location: Hyderabad only.

We are hiring an AWS Data Architect at Coforge Ltd. We're looking for a highly skilled Lead Data Architect with a strong background in AWS, Python, and data engineering. You will lead a team of data engineers and architects, providing technical guidance and mentorship. Your expertise will shape our data strategy, ensuring efficient data processing, storage, and analytics.

Key Responsibilities:
• Lead a team of data engineers and architects, providing technical guidance and mentorship. Develop and execute a strategic roadmap for data processing, storage, and analytics in alignment with organizational goals. The candidate should possess a deep understanding of AWS cloud services and data architecture, with a proven track record of leading data-driven projects to successful completion.
• Design, implement, and maintain robust data pipelines using Python and Airflow, ensuring efficient data flow and transformation for analytical and operational purposes.
• Utilize AWS services, including S3 for data storage and Glue and EMR for data processing, and orchestrate data workflows that are scalable, reliable, and secure.
• Implement real-time data processing solutions using Kafka, SQS, and EventBridge, addressing high-volume data ingestion and streaming needs.
• Oversee the integration of diverse systems and data sources through AppFlow, APIs, and other integration tools, ensuring seamless data exchange and connectivity.
• Lead the development of data warehousing solutions, applying best practices in data modeling to support efficient data storage, retrieval, and analysis.
• Continuously monitor, optimize, and troubleshoot data pipelines and infrastructure, ensuring optimal performance and scalability.
• Ensure adherence to data governance, privacy, and security policies, implementing measures to protect sensitive data and comply with regulatory requirements.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 10 to 15 years of experience in data engineering, with at least 3 years in a leadership role.
• Proficient in Python programming and experienced with Airflow for workflow management.
• Strong expertise in AWS cloud services, particularly in data storage, processing, and analytics (S3, Glue, EMR, etc.).
• Experience with real-time streaming technologies like Kafka, SQS, and EventBridge.
• Solid understanding of API-based integrations and familiarity with integration tools such as AppFlow.
• Deep knowledge of data warehousing concepts.

Please share your CV to Gaurav.2.Kumar@coforge.com or WhatsApp 9667427662 for any queries.
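As a hedged sketch of the S3/Glue/Airflow orchestration this role describes, the DAG below waits for a new S3 object and then triggers a Glue job, using operators from the Amazon provider package (apache-airflow-providers-amazon; exact import paths vary by provider version). Bucket, key, job, and DAG names are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG(
    dag_id="s3_to_glue",                  # hypothetical
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Poke every 5 minutes until the expected file lands in S3.
    wait_for_file = S3KeySensor(
        task_id="wait_for_file",
        bucket_name="raw-landing-zone",             # hypothetical bucket
        bucket_key="sales/{{ ds }}/data.parquet",   # templated daily key
        poke_interval=300,
    )

    # Run the Glue job that transforms the landed file.
    run_glue_job = GlueJobOperator(
        task_id="run_glue_job",
        job_name="transform_sales",                 # hypothetical Glue job
        region_name="us-east-1",
    )

    wait_for_file >> run_glue_job
```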

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Source: Indeed

Job ID R-227531. Date posted 06/05/2025. Job Title: Senior Data Engineer. Career Level: D3.

Introduction to role: Are you ready to disrupt an industry and change lives? We are seeking a seasoned Senior Data Engineer to join our innovative team. With a focus on modern data tools and cloud platforms, you'll play a pivotal role in transforming our ability to develop life-changing medicines. If you have experience as a support engineer, you'll be well-equipped to tackle technical challenges head-on!

Accountabilities: Develop, implement, and maintain data pipelines using technologies like Snowflake, DBT, and Fivetran. Automate and orchestrate workflows and data processes using Airflow. Develop scalable data infrastructure using AWS services (such as S3, RDS, and Lambda). Provide technical support and troubleshooting for data infrastructure challenges and incidents. Ensure high-quality data integration from diverse sources into Snowflake. Employ DBT to create reliable and efficient ETL processes. Utilize strong knowledge of data warehousing concepts to optimize data storage solutions. Implement efficient data storage and retrieval strategies to support business intelligence initiatives. Collaborate with analytics and business teams to address data requirements. Leverage reporting tools like PowerBI or OBIEE to provide insightful visualizations and reports.

Essential Skills/Experience: 3-5 years of relevant experience in data engineering with hands-on expertise in Snowflake, DBT, and Airflow. Strong proficiency in AWS services and infrastructure. Solid understanding of data warehousing concepts and data engineering practices. Experience with SQL and data modeling.

Desirable Skills/Experience: Knowledge of Agile and Fivetran. Experience as a support engineer to enhance troubleshooting and resolution capabilities. Familiarity with reporting tools like PowerBI or MicroStrategy. Strong problem-solving skills and teamwork capabilities. Excellent communication and interpersonal skills.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, our work has a direct impact on patients, empowering the business to perform at its peak by combining cutting-edge science with leading digital technology platforms. We are committed to driving cross-company change, creating new ways of working, and delivering exponential growth. Here, you can innovate, take ownership, and explore new solutions in a dynamic environment that encourages lifelong learning. Ready to make a meaningful impact? Apply now and be part of our journey!

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics.
We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
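As a hedged sketch of the Snowflake/DBT/Airflow stack this posting centers on, the DAG below orchestrates a daily dbt build with BashOperator tasks. The project path and DAG name are assumptions, and it presumes dbt's profiles.yml already holds the Snowflake connection.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical location of the dbt project on the Airflow workers.
DBT_DIR = "/opt/airflow/dbt/warehouse_project"

with DAG(
    dag_id="daily_dbt_build",     # hypothetical
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Build the models first, then run dbt's data tests against them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )

    dbt_run >> dbt_test
```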

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

This role is for one of our clients. Industry: Technology, Information and Media. Seniority level: Mid-Senior level. Min Experience: 5 years. Location: Bengaluru, Karnataka, India. Job type: full-time.

We are seeking a Big Data Engineer with deep technical expertise to join our fast-paced, data-driven team. In this role, you will be responsible for designing and building robust, scalable, and high-performance data pipelines that fuel real-time analytics, business intelligence, and machine learning applications across the organization. If you thrive on working with large datasets, cutting-edge technologies, and solving complex data engineering challenges, this is the opportunity for you.

What You'll Do: Design & Build Pipelines: Develop efficient, reliable, and scalable data pipelines that process large volumes of structured and unstructured data using big data tools. Distributed Data Processing: Leverage the Hadoop ecosystem (HDFS, Hive, MapReduce) to manage and transform massive datasets. Starburst (Trino) Integration: Design and optimize federated queries using Starburst, enabling seamless access across diverse data platforms. Databricks Lakehouse Development: Utilize Spark, Delta Lake, and MLflow on the Databricks Lakehouse Platform to enable unified analytics and AI workloads. Data Modeling & Architecture: Work with stakeholders to translate business requirements into flexible, scalable data models and architecture. Performance & Optimization: Monitor, troubleshoot, and fine-tune pipeline performance to ensure efficiency, reliability, and data integrity. Security & Compliance: Implement and enforce best practices for data privacy, security, and compliance with global regulations like GDPR and CCPA. Collaboration: Partner with data scientists, product teams, and business users to deliver impactful data solutions and improve decision-making.

What You Bring (Must-Have Skills): 5+ years of hands-on experience in big data engineering, data platform development, or similar roles. Strong experience with Hadoop, including HDFS, Hive, HBase, and MapReduce. Deep understanding and practical use of Starburst (Trino) or Presto for large-scale querying. Hands-on experience with the Databricks Lakehouse Platform, Spark, and Delta Lake. Proficient in SQL and programming languages like Python or Scala. Strong knowledge of data warehousing, ETL/ELT workflows, and schema design. Familiarity with CI/CD tools, version control (Git), and workflow orchestration tools (Airflow or similar).

Nice-to-Have Skills: Experience with cloud environments such as AWS, Azure, or GCP. Exposure to Docker, Kubernetes, or infrastructure-as-code tools. Understanding of data governance and metadata management platforms. Experience supporting AI/ML initiatives with curated datasets and pipelines.
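As a hedged sketch of the lakehouse work described above, the snippet below writes a cleaned DataFrame to a Delta table and reads it back. It assumes a Spark session with the Delta Lake package available (as on Databricks); the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-demo").getOrCreate()

# Read raw JSON events from a hypothetical landing path, keep valid rows.
raw = spark.read.json("/mnt/raw/events/")
clean = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("timestamp"))
)

# Write as a Delta table partitioned by date; Delta adds ACID
# transactions and time travel on top of Parquet files.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("/mnt/lake/events_clean"))

# Downstream consumers read the same table.
events = spark.read.format("delta").load("/mnt/lake/events_clean")
events.groupBy("event_date").count().show()
```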

Posted 1 week ago

Apply

8.0 - 10.0 years

18 - 24 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Platform Engineer. Experience: 8-10 years. Contract Duration: 6 months. Location: Bangalore / Chennai. Job Type: Contract.

Job Overview: We are seeking an experienced Platform Data Engineer with a strong background in Python development, workflow automation with Airflow, containerization with Docker/Kubernetes, and solid expertise in observability, CI/CD, and cloud platforms.

Key Responsibilities: Python Development: Design, develop, and maintain Python-based backend applications, tools, and services. Airflow Expertise: Utilize Apache Airflow 2.7+ to orchestrate and automate workflows; configure custom Airflow operators, sensors, and hooks. Docker & Kubernetes: Develop containerized applications using Docker and manage container orchestration using Kubernetes; experience with Helm charts for deploying applications. Observability & Monitoring: Implement robust monitoring and logging systems using tools such as the ELK Stack (Elasticsearch, Logstash, Kibana), Prometheus, and Grafana for real-time metrics and logging. CI/CD Automation: Design and implement continuous integration/continuous deployment (CI/CD) pipelines using Azure DevOps or similar platforms; ensure smooth deployment, integration, and delivery of software. Platform Infrastructure: Collaborate with cross-functional teams to design and implement scalable, secure, and highly available data infrastructure.

Technical Skills & Experience: Python: Expertise in Python development, including writing efficient, modular, and scalable code. Airflow 2.7+: In-depth experience using Apache Airflow for orchestrating data workflows, including knowledge of Airflow internals and the ability to create custom components. Docker & Kubernetes: Strong knowledge of Docker for containerization and experience with Kubernetes for orchestration and scaling; familiarity with Helm for deploying applications. Observability Tools: Expertise in monitoring, logging, and alerting with tools like the ELK Stack (Elasticsearch, Logstash, Kibana), Prometheus, and Grafana. CI/CD (Azure DevOps preferred): Proven experience in setting up and maintaining CI/CD pipelines using Azure DevOps or similar tools like Jenkins, GitLab CI, etc. Cloud & Infrastructure: Experience with cloud platforms like Azure, AWS, or GCP is a plus. Database Skills: Familiarity with working on SQL and NoSQL databases.
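As a hedged sketch of the custom-operator work this role calls for, below is a minimal Airflow operator of the kind that could live in a DAG file or plugin. The health-check service it wraps is hypothetical; it assumes Airflow 2.x and the requests library.

```python
from airflow.models.baseoperator import BaseOperator


class HttpHealthCheckOperator(BaseOperator):
    """Hypothetical operator: fail the task if a service is unhealthy."""

    def __init__(self, endpoint: str, timeout: int = 10, **kwargs):
        super().__init__(**kwargs)
        self.endpoint = endpoint
        self.timeout = timeout

    def execute(self, context):
        import requests  # imported at execute time to keep DAG parsing fast

        response = requests.get(self.endpoint, timeout=self.timeout)
        if response.status_code != 200:
            raise RuntimeError(
                f"{self.endpoint} unhealthy: HTTP {response.status_code}"
            )
        self.log.info("%s healthy", self.endpoint)
        return response.status_code
```

In a DAG it would be used like any built-in operator, e.g. HttpHealthCheckOperator(task_id="check_api", endpoint="https://example.internal/health"), with the endpoint URL being a placeholder.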

Posted 1 week ago

Apply

3.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Source: LinkedIn

About The Company: Veersa is a healthtech company that leverages emerging technology and data science to solve business problems in the US healthcare industry. Veersa has established a niche in serving small and medium entities in the US healthcare space through its tech frameworks, platforms, and tech accelerators. Veersa is known for providing innovative solutions using technology and data science to its client base and is the preferred innovation partner to its clients. Veersa's rich technology expertise manifests in the various tech accelerators and frameworks developed in-house to assist in rapid solutions delivery and implementations. Its end-to-end data ingestion, curation, transformation, and augmentation framework has helped several clients quickly derive business insights and monetize data assets. Veersa teams work across all emerging technology areas such as AI/ML, IoT, and Blockchain, using tech stacks such as MEAN, MERN, Python, GoLang, and RoR, backends such as Java Spring Boot and NodeJs, and databases such as PostgreSQL, MS SQL, MySQL, and Oracle on AWS and Azure cloud using serverless architecture. Veersa has two major business lines: Veersalabs, an in-house R&D and product development platform, and Veersa tech consulting, technical solutions delivery for clients. Veersa's customer base includes large US healthcare software vendors, pharmacy chains, payers, providers, and hospital chains. Though Veersa's focus geography is North America, Veersa also provides product engineering expertise to a few clients in Australia and Singapore.

About The Role: We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our growing Data Engineering Team. In this critical role, you will design, architect, and develop cutting-edge multi-tenant SaaS data solutions hosted on Azure Cloud. Your work will focus on delivering robust, scalable, and high-performance data pipelines and integrations that support our enterprise provider and payer data ecosystem. This role is ideal for someone with deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will also be expected to mentor and guide junior engineers, help define best practices, and contribute to the overall data strategy. We are specifically looking for someone with strong hands-on experience in SQL, Python, and ideally Airflow and Bash scripting.

Key Responsibilities: Architect and implement scalable data integration and data pipeline solutions using Azure cloud services. Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks using tools like SQL, Python, and Airflow. Build and automate data workflows and orchestration pipelines; knowledge of Airflow or equivalent tools is a plus. Write and maintain Bash scripts for automating system tasks and managing data jobs. Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions. Develop and manage data flows, data mappings, and data quality & validation rules across multiple tenants and systems. Implement best practices for data modeling, metadata management, and data governance. Configure, maintain, and monitor integration jobs to ensure high availability and performance. Lead code reviews, mentor data engineers, and help shape engineering culture and standards.
Stay current with emerging technologies and recommend tools or processes to improve the team's effectiveness.

Required Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 3+ years of experience in data engineering, with a strong focus on Azure-based solutions. Proficiency in SQL and Python for data processing and pipeline development. Experience in developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash. Proven experience in designing and implementing real-time and batch data integrations. Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies. Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture. Familiarity with data quality, metadata management, and data validation frameworks. Strong problem-solving skills and the ability to communicate complex technical concepts clearly.

Preferred Qualifications: Experience with multi-tenant SaaS data solutions. Background in healthcare data, especially provider and payer ecosystems. Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git). Experience mentoring and coaching other engineers in technical and architectural decision-making. (ref:hirist.tech)
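As a hedged sketch of the quality-check step this posting folds into its ETL/ELT work, the function below validates an extracted DataFrame before loading, failing loudly so an orchestrator such as Airflow can retry or alert. The column names, keys, and thresholds are hypothetical.

```python
import pandas as pd


def validate_extract(df: pd.DataFrame) -> None:
    """Basic quality checks run between the extract and load steps."""
    errors = []

    if df.empty:
        errors.append("extract returned zero rows")

    # Required columns must exist and contain no nulls.
    for col in ("member_id", "payer_code", "claim_date"):
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif df[col].isna().any():
            errors.append(f"nulls found in column: {col}")

    # The hypothetical natural key must be unique.
    if {"member_id", "claim_date"}.issubset(df.columns):
        if df.duplicated(subset=["member_id", "claim_date"]).any():
            errors.append("duplicate member_id/claim_date pairs")

    if errors:
        # Raising makes the pipeline task fail rather than load bad data.
        raise ValueError("; ".join(errors))


# Passes: a tiny well-formed sample.
validate_extract(pd.DataFrame({
    "member_id": [1, 2],
    "payer_code": ["A", "B"],
    "claim_date": ["2025-01-01", "2025-01-02"],
}))
```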

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About The Company: Veersa is a healthtech company that leverages emerging technology and data science to solve business problems in the US healthcare industry. Veersa has established a niche in serving small and medium entities in the US healthcare space through its tech frameworks, platforms, and tech accelerators. Veersa is known for providing innovative solutions using technology and data science to its client base and is the preferred innovation partner to its clients. Veersa's rich technology expertise manifests in the various tech accelerators and frameworks developed in-house to assist in rapid solutions delivery and implementations. Its end-to-end data ingestion, curation, transformation, and augmentation framework has helped several clients quickly derive business insights and monetize data assets. Veersa teams work across all emerging technology areas such as AI/ML, IoT, and Blockchain, using tech stacks such as MEAN, MERN, Python, GoLang, and RoR, backends such as Java Spring Boot and NodeJs, and databases such as PostgreSQL, MS SQL, MySQL, and Oracle on AWS and Azure cloud using serverless architecture. Veersa has two major business lines: Veersalabs, an in-house R&D and product development platform, and Veersa tech consulting, technical solutions delivery for clients. Veersa's customer base includes large US healthcare software vendors, pharmacy chains, payers, providers, and hospital chains. Though Veersa's focus geography is North America, Veersa also provides product engineering expertise to a few clients in Australia and Singapore.

About the Job Position: SE/Senior Data Engineer (with SQL, Python, Airflow, Bash).

About The Role: We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our growing Data Engineering Team. In this critical role, you will design, architect, and develop cutting-edge multi-tenant SaaS data solutions hosted on Azure Cloud. Your work will focus on delivering robust, scalable, and high-performance data pipelines and integrations that support our enterprise provider and payer data ecosystem. This role is ideal for someone with deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will also be expected to mentor and guide junior engineers, help define best practices, and contribute to the overall data strategy. We are specifically looking for someone with strong hands-on experience in SQL, Python, and ideally Airflow and Bash scripting.

Key Responsibilities: Architect and implement scalable data integration and data pipeline solutions using Azure cloud services. Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks using tools like SQL, Python, and Airflow. Build and automate data workflows and orchestration pipelines; knowledge of Airflow or equivalent tools is a plus. Write and maintain Bash scripts for automating system tasks and managing data jobs. Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions. Develop and manage data flows, data mappings, and data quality & validation rules across multiple tenants and systems. Implement best practices for data modeling, metadata management, and data governance. Configure, maintain, and monitor integration jobs to ensure high availability and performance. Lead code reviews, mentor data engineers, and help shape engineering culture and standards.
- Stay current with emerging technologies and recommend tools or processes to improve the team's effectiveness.

Required Qualifications
- B.Tech or B.E degree in Computer Science, Information Systems, or a related field.
- 3+ years of experience in data engineering, with a strong focus on Azure-based solutions.
- Proficiency in SQL and Python for data processing and pipeline development.
- Experience in developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash (see the sketch after this listing).
- Proven experience in designing and implementing real-time and batch data integrations.
- Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies.
- Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture.
- Familiarity with data quality, metadata management, and data validation frameworks.
- Strong problem-solving skills and the ability to communicate complex technical concepts clearly.

Preferred Qualifications
- Experience with multi-tenant SaaS data solutions.
- Background in healthcare data, especially provider and payer ecosystems.
- Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git).
- Experience mentoring and coaching other engineers in technical and architectural decision-making.

(ref:hirist.tech)
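Since this role centers on SQL, Python, Airflow, and Bash, here is a minimal sketch of the kind of daily orchestration DAG such work involves, combining a Bash automation step with a Python transformation step. It assumes a recent Airflow 2.x release; the DAG id, callable, and commands are hypothetical illustrations, not details from the posting:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def transform_and_load(**context):
    # Placeholder transformation step: a real pipeline would read the
    # day's extract, apply quality checks, and load it to the warehouse.
    print(f"Transforming data for logical date {context['ds']}")


with DAG(
    dag_id="daily_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    # Bash step standing in for the kind of system task the role mentions.
    extract = BashOperator(
        task_id="extract",
        bash_command="echo 'a real job would invoke an ingest script here'",
    )
    load = PythonOperator(
        task_id="transform_and_load",
        python_callable=transform_and_load,
    )

    extract >> load  # extraction must succeed before transform/load runs
```

The retries and retry delay in default_args give each task basic fault tolerance, which is the usual first step toward the high availability and monitoring the posting asks for.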

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description

The Lead Data Engineer will provide technical expertise in analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyze, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching.

Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with the client team.

Leadership is expected not only in the conventional sense but also within the team: candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, and knowledge sharing.

Responsibilities:
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
- Create functional & technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
- Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and to confirm ability to meet business needs.
- May serve as project or DI lead, overseeing multiple consultants.
- Stays current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration.
- Ensures proper execution/creation of methodology, training, templates, resource plans and engagement review processes.
- Coach team members to ensure understanding on projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
- Coordinate and consult with the project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
- Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best practice standards. Toolsets include but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik.
- Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability requirements.

Must have:
- Writing code in a programming language, with working experience in Python, PySpark, Databricks, Scala or similar.
- Data Pipeline Development & Management: design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services like AWS Glue, AWS Data Pipeline, Lambda, and Step Functions.
- Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka (see the sketch after this listing).
- Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora.
- Optimize data partitioning, compression, and indexing for efficient querying and cost optimization.
- Implement data lake architecture using AWS Lake Formation & Glue Catalog.
- Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub.

Good to have:
- Enterprise Data Modelling and Semantic Modelling, with working experience in ERwin, ER/Studio, PowerDesigner or similar.
- Logical/physical modelling on Big Data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner or similar.
- Agile process (Scrum cadences, roles, deliverables), with a basic understanding of Azure DevOps, JIRA or similar.

(ref:hirist.tech)
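To make the incremental-processing bullet concrete, here is a minimal PySpark sketch that loads a single day's partition from a raw zone and appends it to a curated table. The bucket paths, column names, and business key are hypothetical placeholders rather than details from the posting:

```python
import sys

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical run date, e.g. supplied by the orchestrator: "2024-01-01".
run_date = sys.argv[1] if len(sys.argv) > 1 else "2024-01-01"

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

# Read only the day's increment rather than rescanning full history.
increment = (
    spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path
    .where(F.col("event_date") == run_date)
)

# Deduplicate on a hypothetical business key before loading.
curated = increment.dropDuplicates(["event_id"])

# Append the increment to the curated zone, partitioned for pruning.
(
    curated.write.mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)
```

Partitioning the curated output by date is what lets downstream queries prune to only the partitions they need, which is the usual lever behind the querying-efficiency and cost-optimization point above.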

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About The Role

We are seeking a highly skilled Project Manager to lead the design, construction, and commissioning of a new hyperscale/enterprise data center in Pune. The ideal candidate will have hands-on experience in data center construction, with deep expertise in mechanical, electrical, and cooling infrastructure. This role requires strong technical knowledge, stakeholder management, and execution excellence to ensure the project is delivered on time, within budget, and to global standards. The candidate must be comfortable working with US leadership, local contractors, and engineering teams in a fast-paced environment.

Key Responsibilities

Project Planning & Execution
- Lead end-to-end project management for data center construction, including civil works, MEP (Mechanical, Electrical, Plumbing), and IT infrastructure fit-out.
- Develop detailed project plans, including timelines, budgets, risk assessments, and resource allocation.
- Ensure compliance with Uptime Institute Tier Standards, ASHRAE guidelines, and local regulations.
- Coordinate with architects, contractors, vendors, and internal teams to ensure seamless execution.

Technical Oversight
- Mechanical Systems: Oversee HVAC, chilled water systems, CRAC/CRAH units, and airflow management. Ensure proper hot/cold aisle containment, underfloor airflow, and liquid cooling (direct-to-chip, immersion cooling) implementation.
- Electrical Systems: Manage power distribution (HV/LV), UPS systems, generators, PDUs, and backup power solutions. Ensure redundancy (N+1, 2N) and energy efficiency.
- Cabling & Rack Layouts: Supervise structured cabling (fiber/copper), cable tray routing, and rack power distribution. Optimize space utilization, thermal management, and scalability.

Stakeholder & Vendor Management
- Serve as the primary point of contact between US leadership, India operations, and local contractors.
- Conduct weekly progress reviews, risk assessments, and budget tracking.
- Manage RFPs, contractor selection, and SLA compliance.

Quality, Safety & Compliance
- Enforce safety protocols (OSHA/NFPA standards) and ensure zero critical incidents.
- Conduct quality inspections, testing, and commissioning of all systems.
- Ensure sustainability compliance (LEED, energy-efficient cooling, carbon footprint reduction).

Reporting & Documentation
- Maintain real-time dashboards for project health (schedule, budget, risks).
- Provide executive updates to US leadership with data-driven insights.
- Ensure as-built drawings, manuals, and handover documentation are properly archived.

Required Skills & Qualifications

Technical Expertise
- Must-Have: 5-10 years of data center construction/fit-out experience; deep knowledge of grey space mechanical systems (cooling, airflow, containment); expertise in electrical infrastructure (UPS, generators, power distribution); familiarity with liquid cooling (direct-to-chip, immersion cooling); understanding of BMS (Building Management Systems) and DCIM tools.
- Good-to-Have: Knowledge of hyperscale data center standards (Google, AWS, Microsoft); experience with modular/prefabricated data center deployments.

Project Management Skills
- PMP/PRINCE2 certification (preferred).
- Proficiency in MS Project, JIRA, AutoCAD, or BIM tools.
- Strong risk management, budgeting, and scheduling skills.

Soft Skills
- Excellent communication (verbal & written) for US stakeholder management.
- Problem-solving mindset with a strong work ethic and "can-do" attitude.
- Ability to work under tight deadlines in a fast-paced environment.

Why Join Us?
- Lead a cutting-edge, high-impact data center project.
- Work with global leaders in data center infrastructure.
- Competitive salary, performance bonuses, and career growth opportunities.

How To Apply

If you are a technically strong Project Manager with a passion for data center construction, submit your resume with:
- Examples of past data center projects you've managed.
- Key achievements (cost savings, time efficiency, innovation).

Skills: MS Project, JIRA, compliance, project management (PMP/PRINCE2), risk management, problem-solving, data center construction, BIM tools, data center, cooling systems, mechanical systems, infrastructure, airflow, construction, stakeholder management, electrical infrastructure, AutoCAD, communication

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Hyderabad, Chennai

Work from Office


Interested candidates can also apply with Sanjeevan Natarajan: sanjeevan.natarajan@careernet.in

Role & responsibilities
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations (see the sketch after this listing).
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile
- Python, SQL, PySpark, Databricks, AWS (Mandatory)
- Leadership experience in Data Engineering/Architecture
- Added advantage: experience in Life Sciences / Pharma
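As referenced in the orchestration bullet above, here is a minimal sketch of triggering a Databricks job from Python via the Jobs REST API's run-now endpoint (Jobs API 2.1). The workspace URL, token variable, and job id are hypothetical placeholders:

```python
import os

import requests

# Hypothetical workspace host and job id; in a real deployment the token
# would come from a secret store rather than a raw environment variable.
HOST = "https://example.cloud.databricks.com"
JOB_ID = 123

resp = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    json={"job_id": JOB_ID},
    timeout=30,
)
resp.raise_for_status()
print("Triggered run:", resp.json()["run_id"])  # run id for later polling
```

In practice an orchestrator such as Airflow would wrap this call (or use a provider operator) so that triggering, status polling, and retries are handled as ordinary DAG tasks.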

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 10 Lacs

Gurugram

Work from Office


Public Services Industry Strategist

Join our team in Strategy for an exciting career opportunity to work on the Industry Strategy agenda of our most strategic clients across the globe!

Practice: Industry Strategy, Global Network (GN)
Areas of Work: Strategy experience in the Public Services industry: Operating Model and Organization Design, Strategic Roadmap Design, Citizen Experience, Business Case Development (incl. Financial Modelling), Transformation Office, Sustainability, Digital Strategy, Data Strategy, Gen AI, Cloud Strategy, Cost Optimization Strategy
Domain: Public Services: Social Services, Education, Global Critical Infrastructure Services, Revenue, Post & Parcel
Level: Consultant
Location: Gurgaon, Mumbai, Bengaluru, Chennai, Kolkata, Hyderabad & Pune
Years of Exp: 4-7 years of strategy experience post MBA from a Tier 1 institute

Explore an Exciting Career at Accenture

Do you believe in creating an impact? Are you a problem solver who enjoys working on transformative strategies for global clients? Are you passionate about being part of an inclusive, diverse, and collaborative culture? Then, this is the right place for you! Welcome to a host of exciting global opportunities in Accenture Strategy.

The Practice - A Brief Sketch:

The GN Strategy Industry Group is a part of Accenture Strategy and focuses on the CXOs' most strategic priorities. We help clients with strategies that are at the intersection of business and technology, drive value and impact, shape new businesses and design operating models for the future. As a part of this high-performing team, you will:
- Apply advanced corporate finance to drive value using financial levers, value case shaping, and feasibility studies to evaluate new business opportunities
- Analyze competitive benchmarking to advise the C-suite on 360-degree value opportunities, use scenario planning to solve complex C-suite questions, and lead and enable strategic conversations
- Identify strategic cost take-out opportunities, drive business transformation, and suggest value-based decisions based on insights from data
- Apply advanced data analyses to unlock client value aligned with the client's business strategy
- Build future-focused PoVs and develop strategic ecosystem partners
- Build client strategy definitions leveraging disruptive technology solutions, like Data & AI (including Gen AI) and Cloud
- Build relationships with C-suite executives and be a trusted advisor enabling clients to realize the value of human-centered change
- Create thought leadership in industry/functional areas, reinvention agendas, solution tablets and assets for value definition, and use it, along with your understanding of the industry value chain and macroeconomic analyses, to inform client strategy
- Partner with CXOs to architect future-proof operating models embracing the Future of Work, Workforce and Workplace, powered by transformational technology, ecosystems and analytics
- Work with our ecosystem partners to help clients reach their sustainability goals through digital transformation
- Prepare and deliver presentations to clients to communicate strategic plans and recommendations on PS domains such as Digital Citizen, Public Infrastructure, Smart Buildings, Net Zero
- Monitor industry trends and keep clients informed of potential opportunities and threats

The candidate will be required to have exposure to core strategy projects in the Public Services domain, with a focus on one of the sub-industries within Public Service (mentioned below).

Public Service Experience: The candidate must have strategy experience in at least one of the below Public Service sub-industries:
- Social Services (Employment, Pensions, Education, Child Welfare, Government as a Platform, Digital Citizen Services)
- Education
- Global Critical Infrastructure Services (Urban & City Planning, Smart Cities, High-Performing City Operating Model)
- Admin (Citizen Experience, Federal Funds Strategy, Workforce Strategy, Intelligent Back Office, Revenue Industry Strategy, Post & Parcel)

Strategy Skills and Mindsets Expected:
- A strategic mindset to shape innovative, fact-based strategies and operating models
- Communication and presentation skills to hold influential C-suite dialogues, narratives, and conversations, and to share ideas
- Ability to solve problems in unstructured scenarios, and to decode and solve complex and unstructured business questions
- An analytical and outcome-driven approach to perform data analyses and generate insights, and application of these insights for strategic outcomes

Qualifications
- Value-driven business acumen to drive actionable outcomes for clients with the latest industry trends, innovations and disruptions, metrics and value drivers
- Financial acumen and value creation to develop relevant financial models to back up a business case
- Articulation of strategic and future vision
- Ability to identify technology disruptions in the Public Services industry

What's in it for you?
- An opportunity to work on transformative projects with key G2000 clients and CxOs
- Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies
- Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional
- Personalized training modules to develop your strategy & consulting acumen to grow your skills, industry knowledge and capabilities
- Opportunity to thrive in a culture that is committed to accelerating equality for all
- Engage in boundaryless collaboration across the entire organization

About Accenture:

Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions, underpinned by the world's largest delivery network, Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With more than 732,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com.

About Accenture Strategy & Consulting:

Accenture Strategy shapes our clients' future, combining deep business insight with an understanding of how technology will impact industry and business models. Our focus on issues such as digital disruption, redefining competitiveness, operating and business models, as well as the workforce of the future, helps our clients find future value and growth in a digital world. Today, digital is changing the way organizations engage with their employees, business partners, customers and communities. This is our unique differentiator. To bring this global perspective to our clients, Accenture Strategy's services include those provided by our Global Network, a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our Global Network teams complement our in-country teams to deliver cutting-edge expertise and measurable value to clients all around the world. For more information visit accenture.com/en/careers/local/capability-network-careers.

Accenture Global Network | Accenture in One Word

At the heart of every great change is a great human. If you have ideas, ingenuity and a passion for making a difference, come and be a part of our team.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office


Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must have skills: Managed File Transfer
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders and explain any performance issues or risks, ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime, and hold performance meetings to share performance and consumption data and trends.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Ensure effective communication between client and operations teams.
- Analyze service delivery health and address performance issues.
- Conduct performance meetings to share data and trends.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Managed File Transfer.
- Strong understanding of cloud orchestration and automation.
- Experience in SLA management and performance analysis.
- Knowledge of IT service delivery and escalation processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Managed File Transfer.
- This position is based at our Pune office.
- A 15 years full time education is required.

Qualifications: 15 years full time education

Posted 1 week ago

Apply

6.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Designs, implements and maintains reliable and scalable data infrastructure
- Writes, deploys and maintains software to build, integrate, manage, maintain, and quality-assure data
- Develops and delivers large-scale data ingestion, data processing, and data transformation projects on the Azure cloud
- Mentors and shares knowledge with the team through design reviews, discussions and prototypes
- Works with customers to deploy, manage, and audit standard processes for cloud products
- Adheres to and advocates for software & data engineering standard processes (e.g.
data engineering pipelines, unit testing, monitoring, alerting, source control, code review & documentation)
- Deploys secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves CI/CD pipelines
- Follows site-reliability engineering standard processes for service reliability: on-call rotations for services they maintain, responsibility for defining and maintaining SLAs
- Designs, builds, deploys and maintains infrastructure as code; containerizes server deployments
- Works as part of a cross-disciplinary team, closely with other data engineers, architects, software engineers, data scientists, data managers and business partners in a Scrum/Agile setup

Mandatory Skill Sets ('must have' knowledge, skills and experiences): Synapse, ADF, Spark, SQL, PySpark, Spark-SQL

Preferred Skill Sets ('good to have' knowledge, skills and experiences): Cosmos DB, data modeling, Databricks, Power BI, experience of having built an analytics solution with SAP as the data source for ingestion pipelines.

Depth: The candidate should have in-depth hands-on experience with end-to-end solution design in Azure Data Lake, ADF pipeline development and debugging, various file formats, Synapse and Databricks, with excellent coding skills in PySpark and SQL and logic-building capabilities. He/she should have sound knowledge of optimizing workloads.

Years of Experience Required: 6 to 9 years of relevant experience

Education Qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)

Expected Joining: 3 weeks

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration, Bachelor of Engineering, Master of Engineering
Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills: Structured Query Language (SQL)

Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}

Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date

Posted 1 week ago

Apply

5.0 - 10.0 years

12 - 17 Lacs

Hyderabad, Pune

Hybrid


Role 1 - GCP Data Engineer. Mandatory skills: GCP, BigQuery, Dataflow, Cloud Composer
Role 2 - Big Data Engineer. Mandatory skills: Big Data, PySpark, Scala, Python
Role 3 - GCP DevOps Engineer. Mandatory skills: GCP DevOps

Experience Range: 5+ years
Location: Pune and Hyderabad only. If you are applying from outside Pune or Hyderabad, you will have to relocate.
Work Mode: a minimum of 2 days of work from home is mandatory.
Salary: 12-16 LPA

Points to remember:
- Please fill in the Candidate Summary Sheet.
- Candidates with more than a 30-day notice period will not be considered.

Highlights of this role:
- It is a long-term role, with a high possibility of conversion within 6 months or after 6 months (if you perform well).
- Interview: 2 rounds in total (both virtual), but one face-to-face meeting is mandatory at any of these locations: Pune/Hyderabad/Bangalore/Chennai. Otherwise we cannot onboard you.
- UAN verification will be done in the background check; any overlap in past employment will eliminate you. Continuous PF deduction for the last 4 years is mandatory.

Client Company: one of the leading technology consulting firms.
Payroll Company: one of the leading IT services & staffing companies, with a presence in India, UK, Europe, Australia, New Zealand, US, Canada, Singapore, Indonesia, and the Middle East.

Do not change the subject line or create a new email while sharing/applying for this position. Please reply on this email thread only.

Role 1 - GCP Data Engineer

About the Role: We are seeking a highly skilled and passionate GCP Data Engineer to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining scalable and robust data pipelines and solutions on Google Cloud Platform (GCP). You will work closely with data scientists, analysts, and other stakeholders to translate business requirements into efficient data architectures, enabling data-driven decision-making across the organization.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
- 5+ years of experience as a Data Engineer, with a strong focus on Google Cloud Platform (GCP).
- Mandatory hands-on experience with core GCP data services:
  - BigQuery (advanced SQL, data modeling, query optimization)
  - Dataflow (Apache Beam, Python/Java SDK)
  - Cloud Composer / Apache Airflow for workflow orchestration
  - Cloud Storage (GCS)
  - Cloud Pub/Sub for messaging/streaming
- Strong programming skills in Python (preferred) or Java/Scala for data manipulation and pipeline development.
- Proficiency in SQL and experience with relational and NoSQL databases.
- Experience with data warehousing concepts, ETL/ELT processes, and data modeling techniques.
- Understanding of distributed systems and big data technologies (e.g., Spark, Hadoop concepts, Kafka).
- Familiarity with CI/CD practices and tools.

Role 2 - Big Data Engineer

About the Role: We are looking for an experienced and passionate Big Data Engineer to join our dynamic team. In this role, you will be responsible for designing, building, and maintaining scalable, high-performance data processing systems and pipelines capable of handling vast volumes of structured and unstructured data.
You will play a crucial role in enabling our data scientists, analysts, and business teams to derive actionable insights from complex datasets.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Minimum 5 years of proven experience as a Big Data Engineer or in a similar role.
- Extensive hands-on experience with Apache Spark (PySpark, Scala) for data processing.
- Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce).
- Proficiency in Python and/or Scala/Java.
- Solid SQL skills and experience with relational databases.
- Experience designing and building complex ETL/ELT pipelines.
- Familiarity with data warehousing concepts and data modelling techniques (star schema, snowflake, data vault).
- Understanding of distributed computing principles.
- Excellent problem-solving, analytical, and communication skills.

Role 3 - GCP DevOps Engineer

We are seeking a highly motivated and experienced GCP DevOps Engineer to join our innovative engineering team. You will be responsible for designing, implementing, and maintaining robust, scalable, and secure cloud infrastructure and automation pipelines on Google Cloud Platform (GCP). This role involves working closely with development, operations, and QA teams to streamline the software delivery lifecycle, enhance system reliability, and promote a culture of continuous improvement.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- 5 years of experience in a DevOps or SRE role, with significant hands-on experience on Google Cloud Platform (GCP).
- Strong expertise in core GCP services relevant to DevOps: Compute Engine, GKE, Cloud SQL, Cloud Storage, VPC, Cloud Load Balancing, IAM.
- Proficiency with Infrastructure as Code (IaC) tools, especially Terraform.
- Extensive experience in designing and implementing CI/CD pipelines using tools like Cloud Build, Jenkins, or GitLab CI.
- Hands-on experience with containerization (Docker) and container orchestration (Kubernetes/GKE).
- Strong scripting skills in Python and Bash/Shell.
- Experience with monitoring and logging tools (Cloud Monitoring, Prometheus, Grafana, ELK stack).
- Solid understanding of networking concepts (TCP/IP, DNS, load balancers, VPNs) in a cloud environment.
- Familiarity with database concepts and experience managing cloud databases (e.g., Cloud SQL, Firestore).

*** Mandatory to share: Candidate Summary Sheet ***

Interested parties can share their resume at shant@harel-consulting.com along with the below details:
- Applying for which role (please mention the role name):
- Your Name:
- Contact No:
- Email ID:
- Do you have a valid passport:
- Total Experience:

Role 1:
- Experience in GCP:
- Experience in BigQuery:
- Experience in Dataflow:
- Experience in Cloud Composer:
- Experience in Apache Airflow:
- Experience in Python OR Java OR Scala, and how much:

Role 2:
- Experience in Big Data:
- Experience in Hive:
- Experience in Python OR Java OR Scala, and how much:
- Experience in PySpark:

Role 3:
- Experience in GCP DevOps:
- Experience in Python:
- Current CTC:
- Expected CTC:
- What is your notice period in your current company:
- Are you currently working or not:
- If not working, when did you leave your last company:
- Current location:
- Preferred location:
- It's a Contract-to-Hire role; are you OK with that:
- Highest qualification:
- Current employer (payroll company name):
- Previous employer (payroll company name):
- 2nd previous employer (payroll company name):
- 3rd previous employer (payroll company name):
- Are you holding any offer:
- Are you expecting any offer:
- Are you open to considering a Contract-to-Hire (C2H) role:
- PF deduction happening in current company:
- PF deduction happened in 2nd last employer:
- PF deduction happened in 3rd last employer:
- Latest photo:

If you are working with a company whose employee strength is less than 2000 employees, it is mandatory to share your UAN service history.

BR,
Shantpriya
Harel Consulting
shant@harel-consulting.com
9582718094

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About Titan

At Titan, we're redefining email for entrepreneurs, innovators, and creators, transforming it into a powerful tool for business growth. Built by a team that deeply cares about helping businesses succeed, Titan is more than just an email platform. Founded by Bhavin Turakhia, who also founded Directi, Radix, and Zeta, with a combined enterprise value exceeding $2 billion, Titan is backed by a strong legacy of innovation. Today, Titan powers millions of conversations, with 2.4 million emails sent and received every week.

In 2021, Automattic (the parent company of WordPress.com) invested $30M in Titan, valuing the company at $300M. This partnership fuels our mission to revolutionize email and build the future of digital communication. At Titan, you'll be part of a fast-growing business, solving meaningful problems and shaping a product that empowers millions. Join us to make a real impact.

About Neo

Neo is our fast-growing direct-to-customer platform designed to help small businesses, solopreneurs, and professionals establish a professional online presence with ease. Our offering includes domain name registration, an AI-powered website builder, and professional email, packaged at an affordable monthly rate to ensure accessibility for businesses of all sizes. We are now poised for our next phase of growth and are seeking a Growth Marketing Lead to accelerate new customer acquisition.

About The Role

Join a high-performing team of data scientists and cross-functional partners to uncover insights, drive product strategy, and shape the future of our products. You'll work across the business to identify key opportunities, optimize campaigns, and inform go-to-market and product decisions with data at the core.

Roles And Responsibilities
- Work with both large and small datasets to solve a variety of business problems using rigorous analytical and statistical methods
- Apply technical expertise in managing data infrastructure, quantitative analysis, experimentation, dashboard building and data storytelling to develop actionable strategies and influence product and business decisions
- Identify and measure the success of product efforts through goal setting, forecasting, and monitoring of key product metrics and initiatives to understand trends and performance
- Make sound, data-informed recommendations, even when data is sparse, through strong judgment and a structured approach to uncertainty
- Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions

Skills And Qualifications
- Bachelor's or Master's degree in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or Economics
- 3+ years of experience in data science, analytics, or related roles
- Proficient in SQL and scripting languages such as Python
- Comfortable working with imperfect or incomplete data, and able to apply appropriate methodologies to extract insights
- Experience with data visualization tools (e.g., Metabase, Tableau, Power BI, QuickSight)
- Familiarity with machine learning techniques (e.g., regression, decision trees) is a bonus
- Nice to have: experience working with AWS Cloud, Apache Airflow

Perks And Benefits

We at Titan love our jobs. And it's no surprise: we get to work on exciting and new projects, in a vibrant atmosphere that is designed to be comfortable and conducive for our personal and professional growth. And Titan goes the extra mile to make you feel at home.
We offer benefits ranging from affordable catered meals to snacks on the house. Our workspaces are welcoming and fun, complete with bean bag chairs and ping pong tables. You are free to wear what makes you comfortable and choose the hours you keep as a team. Oh, and we've got your family covered too, with excellent health insurance plans and other benefits. In short, everything you need to be your best self at work!

If you like the idea of working on solutions that have a truly global impact, get in touch!

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
- Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users
- Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business processes, system processes, and industry standards, and make evaluative judgements
- Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
- Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
- Ensure essential procedures are followed and help define operating standards and processes
- Serve as advisor or coach to new or lower-level analysts
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy; act as SME to senior stakeholders and/or other team members
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency

Qualifications:
- 5-8 years of relevant experience
- Experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Working knowledge of consulting/project management techniques/methods
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Position Overview: We are looking for an experienced and skilled Assistant Vice President (AVP) with 5 to 8 years of experience in software development and data engineering. The ideal candidate should have expertise in Python, Spark, Oracle, Big Data technologies, and Unix systems. This role requires a strong technical background and the ability to contribute to the design and implementation of innovative solutions.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient data processing pipelines using Python and Spark.
- Work with Big Data technologies to process and analyze large datasets.
- Manage and optimize Oracle databases, ensuring high performance and reliability.
- Develop and maintain scripts and automation processes on Unix systems.
- Collaborate with cross-functional teams to gather requirements and deliver technical solutions aligned with business needs.
- Troubleshoot and resolve technical issues across the tech stack.
- Stay updated with emerging technologies and contribute to the adoption of best practices.

Required Skills and Qualifications:
- 5 to 8 years of hands-on experience in software development and data engineering.
- Proficiency in Python and Spark for data processing and analytics.
- Strong knowledge of Oracle databases, including SQL and performance tuning.
- Experience with Big Data technologies (e.g., Hadoop, Hive, HDFS).
- Solid understanding of Unix systems and shell scripting.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.

Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Familiarity with data pipeline orchestration tools (e.g., Apache Airflow).
- Knowledge of DevOps practices and CI/CD pipelines.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.

Engineering at Innovaccer

With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.

About the Role

We're on a mission to completely change the way healthcare works by building the most powerful Healthcare Intelligence Platform ever made. Using an AI-first approach, our goal is to turn complicated health data into real-time insights that help hospitals, clinics, pharmaceutical companies, and researchers make faster, smarter decisions.

We're building a unified platform from the ground up, specifically for healthcare. This platform will bring together everything from:
- Collecting data from different systems (Data Acquisition)
- Combining and cleaning it (Integration, Data Quality)
- Managing patient records and provider info (Master Data Management)
- Tagging and organizing it (Data Classification & Governance)
- Running analytics and building AI models (Analytics, AI Studio)
- Creating custom healthcare apps (App Marketplace)
- Using AI as a built-in assistant (AI as BI + Agent-first approach)

This platform will let healthcare teams build solutions quickly, without starting from scratch each time. For example, they'll be able to:
- Track and manage kidney disease patients across different hospitals
- Speed up clinical trials by analyzing real-world patient data
- Help pharmacies manage their stock better with predictive supply chain tools
- Detect early signs of diseases like diabetes or cancer with machine learning
- Ensure regulatory compliance automatically through built-in checks

This is a huge, complex, and high-impact challenge, and we're looking for a Software Development Engineer III to help lead the way. In this role, you'll:
- Design and build scalable, secure, and reliable systems
- Create core features like data quality checks, metadata management, data lineage tracking, and privacy/compliance layers
- Work closely with other engineers, product managers, and healthcare experts to bring the platform to life

If you're passionate about using technology to make a real difference in the world, and enjoy solving big engineering problems, we'd love to connect.

A Day in the Life
- Architect, design, and build scalable data tools and frameworks.
- Collaborate with cross-functional teams to ensure data compliance, security, and usability.
- Lead initiatives around metadata management, data lineage, and data cataloging.
- Define and evangelize standards and best practices across data engineering teams.
- Own the end-to-end lifecycle of tooling, from prototyping to production deployment.
- Mentor and guide junior engineers and contribute to technical leadership across the organization.
- Drive innovation in privacy-by-design, regulatory compliance (e.g., HIPAA), and data observability solutions.

What You Need
- 6+ years of experience in software engineering, with strong experience building distributed systems.
- Proficient in backend development (Python, Java, Scala, or Go) and familiar with RESTful API design.
- Expertise in modern data stacks: Kafka, Spark, Airflow, Snowflake, etc.
- Experience with open-source data governance frameworks like Apache Atlas, Amundsen, or DataHub is a big plus.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their native data governance offerings.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Here's What We Offer
- Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. (*Noida office only)
- Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. (*India offices)

Where and How We Work

Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

About Innovaccer

Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings, and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com.

Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About The Role

Uber sends billions of messages to our users across channels such as Email, Push, SMS, WhatsApp, and in-app surfaces, through an internally built CRM system. We're looking for a Product Manager to lead the development of marketing measurement and insight-generation tools. This role will focus on enabling clear performance tracking, consistent measurement, and data-driven decision-making, empowering teams across Uber to optimize marketing efforts with confidence and speed.

What the Candidate Will Do
- Partner with Marketing, Data Science, Engineering, and other cross-functional teams to deeply understand business needs and define measurement strategies.
- Drive the product vision, roadmap, and execution.
- Build and refine underlying data processes and pipelines to ensure reliable, high-quality datasets that power measurement and insight generation.
- Collaborate with Engineering to design, implement, and maintain scalable data systems (e.g., data lakes, ETL frameworks) supporting marketing workflows.
- Develop intuitive dashboards and analytics tools that surface actionable insights on campaign performance, audience engagement, channel effectiveness, and overall marketing impact.
- Establish frameworks for consistent marketing measurement, including attribution, incrementality, and experimentation, ensuring alignment across diverse teams and markets.
- Collaborate with stakeholders to define KPIs, track impact, and foster continuous improvement in data-driven marketing decisions.
- Champion data governance and best practices so that marketers can trust and confidently act on insights.

Basic Qualifications
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related technical or analytical field.
- 5+ years of product management experience with a focus on data platforms, analytics, or business intelligence.
- Strong understanding of marketing measurement, data modeling, and reporting best practices.
- Experience working with large-scale data infrastructure and tools (e.g., SQL, Looker, BigQuery, Airflow).
- Ability to translate complex data requirements into simple, user-centric products.
- Strong cross-functional collaboration and communication skills.

Preferred Qualifications
- Master's degree in a technical field.
- Experience in digital marketing, CRM, or MarTech environments.
- Familiarity with experimentation and incrementality testing.
- Interest in applying AI/ML to enhance marketing analytics and insights.

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The airflow job market in India is rapidly growing as more companies are adopting data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for airflow professionals in India varies based on experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of airflow, a typical career path may progress as follows:

  • Junior Airflow Developer
  • Airflow Developer
  • Senior Airflow Developer
  • Airflow Tech Lead

Related Skills

In addition to airflow expertise, professionals in this field are often expected to have or develop skills in:

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
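Several of the basic questions above (defining a DAG, scheduling it, and wiring task dependencies) can be answered with a few lines of code. Below is a minimal sketch, assuming a recent Airflow 2.x release, that also shows XCom in its simplest form: the return value of one task being pulled by the next. The DAG and task names are illustrative only:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Returned values are automatically pushed to XCom.
    return {"rows": 42}


def report(ti):
    # Pull the upstream task's return value from XCom.
    payload = ti.xcom_pull(task_ids="extract")
    print(f"Extracted {payload['rows']} rows")


with DAG(
    dag_id="interview_demo",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),  # required before scheduling begins
    schedule="@daily",                # presets or cron strings work here
    catchup=False,                    # skip backfilling past runs
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="report", python_callable=report)

    t1 >> t2  # dependency: report runs only after extract succeeds
```

Being able to walk through each line of a snippet like this, and to explain what the scheduler, XCom, and the `>>` dependency operator are doing, covers a good portion of the basic and medium questions listed above.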

Closing Remark

As you explore job opportunities in the airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!


