
2873 Airflow Jobs - Page 44

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

India

On-site

Source: LinkedIn

Must have at least 3 years of professional Python and ML experience and a Master's degree in Computer Science or equivalent.

About Us: We are a fashion-focused e-commerce company leveraging cutting-edge AI technologies to transform how customers discover products. Our platform integrates intelligent search and recommendation systems to deliver a personalized shopping experience. We analyze user behavior, micro/macro fashion trends, and product metadata to curate and rank content dynamically.

Role Overview: We are seeking a skilled Data Scientist with strong experience in building recommendation systems to join our growing team. You will play a critical role in designing and optimizing personalized experiences for millions of users by transforming raw data into insights and automated systems.

Key Responsibilities:
- Design, build, and deploy scalable recommendation engines using collaborative filtering, content-based methods, or hybrid approaches.
- Develop user profiling models using clickstream and behavioral data.
- Leverage AI-driven product tagging to enhance metadata quality and retrieval.
- Analyze macro and micro fashion trends to influence product rankings.
- Extract insights from large-scale user data and convert them into actionable models.
- Work closely with engineers and product managers to integrate models into production.
- Develop and monitor metrics for model performance and user engagement impact.

Required Skills and Qualifications:
- 2+ years of experience in data science, ideally in e-commerce or consumer tech.
- Hands-on experience building and deploying recommendation systems (e.g., matrix factorization, deep learning-based recommenders, implicit/explicit feedback models).
- Proficiency in Python and machine learning libraries (e.g., scikit-learn, TensorFlow, PyTorch, LightFM); a minimal LightFM sketch follows this posting.
- Experience with data analysis tools such as SQL, Pandas, and Jupyter.
- Strong grasp of personalization techniques and user segmentation strategies.
- Solid understanding of product ranking using behavioral data and trend signals.
- Experience working with large-scale data pipelines and A/B testing frameworks.
- Strong communication and problem-solving skills.

Preferred Qualifications:
- Experience in the fashion or lifestyle e-commerce domain.
- Knowledge of modern MLOps workflows and model monitoring tools.
- Familiarity with cloud platforms (AWS, GCP) and tools like Airflow or dbt.
- Background in NLP or computer vision for fashion tagging is a plus.
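The posting names implicit/explicit-feedback recommenders and LightFM among the expected tools. As a minimal illustrative sketch (not this company's actual system), here is an implicit-feedback matrix factorization model in LightFM; the toy interaction data and hyperparameters are assumptions:

```python
# Minimal sketch: implicit-feedback matrix factorization with LightFM.
# The interaction data, dimensions, and hyperparameters are illustrative.
import numpy as np
from scipy.sparse import coo_matrix
from lightfm import LightFM

# Toy clickstream: (user, item) pairs treated as implicit positive feedback.
users = np.array([0, 0, 1, 2, 2, 3])
items = np.array([1, 3, 0, 2, 3, 1])
data = np.ones(len(users), dtype=np.float32)
interactions = coo_matrix((data, (users, items)), shape=(4, 5))

model = LightFM(no_components=16, loss="warp")  # WARP loss suits implicit feedback
model.fit(interactions, epochs=20)

# Score all items for user 0 and take the top 3 as recommendations.
scores = model.predict(0, np.arange(5))
print(np.argsort(-scores)[:3])
```

In production such a model would be trained on real clickstream matrices and evaluated with ranking metrics before an A/B test, as the responsibilities above describe.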

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

On-site

Source: LinkedIn

About Us
Udacity is on a mission of forging futures in tech through radical talent transformation in digital technologies. We offer a unique and immersive online learning platform, powering corporate technical training in fields such as Artificial Intelligence, Machine Learning, Data Science, Autonomous Systems, Cloud Computing and more. Our rapidly growing global organization is revolutionizing how the enterprise market bridges the talent shortage and skills gaps during their digital transformation journey.

At Udacity, the Analytics Team deploys data to inform and empower the company with insight, to drive student success and business value. We are looking for a Principal Data Analyst to help advance that vision as part of our business analytics group. You will work with stakeholders to inform their current initiatives and long-term roadmap with data. You will be a key part of a dynamic data team that works daily with strategic partners to deliver data, prioritize resources and scale our impact. This is a chance to affect thousands of students around the world who come to Udacity to improve their lives, and your success as part of a world-class analytics organization will be visible up to the highest levels of the company.

Your Responsibilities
- Report to the Director of Data and lead high-impact analyses of Udacity's curriculum and learner behavior to optimize content strategy, ensure skills alignment with industry needs, and drive measurable outcomes for learners and enterprise clients.
- Lead the development of a strategic analytics roadmap for Udacity's content organization, aligning insights with learning, product, and business goals.
- Partner with senior stakeholders to define and monitor KPIs that measure the health, efficacy, and ROI of our curriculum across both B2C and enterprise portfolios.
- Centralize and synthesize learner feedback, CX signals, and performance data to identify content pain points and inform roadmap prioritization.
- Develop scalable methods to assess content effectiveness by integrating learner outcomes, usage behavior, and engagement metrics.
- Contribute to building AI-powered systems that classify learner feedback, learning styles, and success predictors.
- Act as a thought partner to leaders across Content and Product by communicating insights clearly and influencing strategic decisions.
- Lead cross-functional analytics initiatives and mentor peers and junior analysts to elevate data maturity across the organization.

Requirements
- 8+ years of experience in analytics or data science roles with a focus on product/content insights, ideally in edtech or SaaS.
- Advanced SQL and experience with data warehouses (Athena, Presto, Redshift, etc.).
- Strong proficiency in Python for data analysis, machine learning, and automation.
- Experience with dashboards and visualization tools (e.g., Tableau, Power BI, or similar).
- Strong knowledge of experimentation, A/B testing, and causal inference frameworks.
- Proven ability to lead high-impact analytics projects independently and influence stakeholders.
- Excellent communication skills: able to translate technical insights into business recommendations.

Preferred Experience
- Familiarity with Tableau, Amplitude, dbt, Airflow, or similar tools.
- Experience working with large-scale sequential or clickstream data.
- Exposure to NLP, embeddings, or GPT-based analysis for feedback classification.
- Understanding of learning science or instructional design principles.

Benefits
Experience a rewarding work environment with Udacity's perks and benefits! At Udacity, we offer you the flexibility of working from home. We also have in-person collaboration spaces in Mountain View, Cairo, Dubai and Noida and continue to build opportunities for team members to connect in person.
- Flexible working hours
- Paid time off
- Comprehensive medical insurance coverage for you and your dependents
- Employee wellness resources and initiatives (access to wellness platforms like Headspace)
- Quarterly wellness day off
- Personalized career development
- Unlimited access to Udacity Nanodegrees

What We Do
Forging futures in tech is our vision. Udacity is where lifelong learners come to learn the skills they need, to land the jobs they want, and to build the lives they deserve.

Don't stop there! Please keep reading...
You've probably heard the statistic that men apply for a job when they meet only 60% of the qualifications, while women and other marginalized candidates apply only if they meet 100% of them. If you think you have what it takes but don't meet every single point in the job description, please apply! We believe that historically, many processes have disproportionately hurt the most marginalized communities in society, including people of color, people from working-class backgrounds, women and LGBTQ people. Centering these communities at our core is pivotal for any successful organization and a value we uphold steadfastly. Therefore, Udacity strongly encourages applications from all communities and backgrounds. Udacity is proud to be an Equal Employment Opportunity employer. Please read our blog post for "6 Reasons Why Diversity, Equity, and Inclusion in the Workplace Exists".

Last, but certainly not least...
Udacity is committed to creating economic empowerment and a more diverse and equitable world. We believe that the unique contributions of all Udacians are the driver of our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, color, religion, sex, gender, gender identity or expression, sexual orientation, marital status, national origin, ancestry, disability, medical condition (including genetic information), age, veteran status or military status, or any other basis protected by federal, state or local laws.

As part of our ongoing work to build more diverse teams at Udacity, when applying you will be asked to complete a voluntary self-identification survey. This survey is anonymous; we are unable to connect your application with your survey responses. Please complete this voluntary survey, as we utilize the data for diversity measures in terms of gender and ethnic background in both our candidates and our Udacians. We take this data seriously and appreciate your willingness to complete this step in the process, if you choose to do so.

Udacity's Values
Obsess over Outcomes - Take the Lead - Embrace Curiosity - Celebrate the Assist

Udacity's Terms of Use and Privacy Policy

Posted 1 week ago

Apply

3.0 - 7.0 years

8 - 17 Lacs

Chennai

Work from Office

Source: Naukri

Greetings from SwaaS!

Role: ETL Developer
Experience: 5+ years
Location: Guindy, Chennai (on-site)
Immediate joiners preferred.

Mandatory Skills:
- 8+ years in ETL development, with 4+ years of hands-on AWS PySpark scripting (a sketch follows this posting).
- Strong experience in AWS services: S3, Lambda, SNS, Step Functions.
- Expertise in PySpark and Python (NumPy, Pandas).
- Ability to work independently as an individual contributor.
- Solid understanding of AWS-based data pipelines and solutions.

Good to Have:
- Experience processing large volumes of semi-structured and structured data.
- Familiarity with building data lakes and Delta Lake configurations.
- Knowledge of metadata management, data lineage, and data governance principles.
- Proficiency in cost-efficient computing and building scalable data integration frameworks.
- Experience with MWAA (Managed Workflows for Apache Airflow).

Soft Skills:
- Strong communication skills for engaging with IT and business stakeholders.
- Ability to understand challenges and drive effective data delivery.
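For context on what "AWS PySpark scripting" typically involves, here is a minimal, hypothetical ETL step; the bucket names, schema, and paths are assumptions, not SwaaS specifics:

```python
# Minimal sketch of an S3-to-S3 PySpark ETL step; all paths and columns
# are hypothetical. On EMR, s3:// paths resolve via EMRFS.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.json("s3://example-raw/orders/")  # hypothetical source
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)
daily.write.mode("overwrite").parquet("s3://example-curated/daily_orders/")
```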

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Hi,

Position: Python Developer
Location: Hyderabad, India

As a Python Developer, you will design and build robust Python applications, emphasizing scalable, cloud-native solutions, and apply .NET and general programming skills to support cross-functional projects.

Job requirements:
- A programming-intensive role that focuses primarily on Python development, with secondary expertise in AWS infrastructure.
- Leverage AWS services like Lambda, CDK, CloudFront, and Bedrock for development and deployments (see the handler sketch after this posting).
- Optimize infrastructure with expertise in AWS deployments and networking principles.
- Utilize tools like Airflow to streamline workflows and enhance system efficiency.
- Proficiency in Python for application development.
- Join our innovative team to drive cutting-edge cloud solutions.
- BS in Computer Science or an equivalent degree.
- Experience with Python libraries for ML, NLP, search, and AI.
- Ability to work in an Agile environment.
- Excellent communication skills.

Please share your resume at Anusha@americanunit.com
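Since the role centers on Python with AWS Lambda, a minimal hedged sketch of a Lambda handler follows; the S3 event shape is standard, but the processing logic and response are purely illustrative:

```python
# Minimal sketch of a Python AWS Lambda handler for S3 put events.
# The response shape and logic are illustrative, not a company standard.
import json

def handler(event, context):
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]  # objects that triggered us
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}
```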

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist. In this role, you will:
- Design, implement and maintain robust CI/CD pipelines using Jenkins and Ansible.
- Deploy, monitor and manage applications and services on Kubernetes.
- Support and maintain Airflow DAGs for data processing workflows (see the sketch after this posting).
- Ensure operational stability and performance of Spark jobs.
- Implement monitoring, logging and alerting systems for daily issue detection and resolution.
- Collaborate with developers, data engineers and BAs to improve release velocity and reliability.
- Maintain infrastructure as code and ensure secure and compliant deployments.
- Manage DORA (DevOps) metrics, ITSM and technology controls, and service quality KPIs.
- Support production releases; be flexible in working hours and ready to work in shifts and on call.

Requirements
To be successful in this role, you should meet the following requirements:
- 6-8 years of experience in DevOps or a similar role.
- Strong experience with CI/CD tools: Jenkins, Ansible.
- Hands-on experience deploying and managing applications on Kubernetes.
- Familiarity with Airflow, including developing and troubleshooting DAGs.
- Knowledge of Apache Spark operations and job management in distributed environments.
- Good understanding of scripting languages (bash, Python, etc.).
- Ability to work at pace, communicate clearly, operate independently and be a good team player.
- Good to have: knowledge of Quantexa software.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by HSBC Software Development India
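Since this page aggregates Airflow roles and this one maintains DAGs, here is a minimal sketch of the kind of DAG the role would support; the dag_id, schedule, and command are assumptions:

```python
# Minimal Airflow 2.x DAG sketch: one task submitting a Spark job daily.
# Use schedule_interval instead of schedule on Airflow versions before 2.4.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="spark_daily_job",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    submit = BashOperator(
        task_id="submit_spark_job",
        bash_command="spark-submit --master yarn /opt/jobs/daily_agg.py",  # hypothetical job
    )
```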

Posted 1 week ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Position Overview

Job Title: Production Specialist, AVP
Location: Pune, India

Role Description
Our organization within Deutsche Bank is AFC Production Services. We are responsible for providing technical L2 application support for business applications. The AFC (Anti-Financial Crime) line of business has a current portfolio of 25+ applications. The organization is in the process of transforming itself using Google Cloud and many new technology offerings. As an Assistant Vice President, your role will include hands-on production support and active involvement in resolving technical issues across multiple applications. You will also work as an application lead and be responsible for the technical and operational processes of every application you support.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy.
- Gender-neutral parental leave.
- 100% reimbursement under the childcare assistance benefit (gender neutral).
- Sponsorship for industry-relevant certifications and education.
- Employee Assistance Program for you and your family members.
- Comprehensive hospitalization insurance for you and your dependents.
- Accident and term life insurance.
- Complementary health screening for those 35 years and above.

Your Key Responsibilities
- Provide technical support by handling and consulting on BAU, incidents, emails and alerts for the respective applications.
- Perform post-mortems and root cause analysis using ITIL standards of Incident Management, Service Request fulfillment, Change Management, Knowledge Management, and Problem Management.
- Manage the regional L2 team and vendor teams supporting the application; ensure the team is up to speed and picks up the support duties.
- Build up technical subject matter expertise on the applications being supported, including business flows, application architecture, and hardware configuration.
- Define and track KPIs, SLAs and operational metrics to measure and improve application stability and performance.
- Conduct real-time monitoring to ensure application SLAs are achieved and application availability (uptime) is maximized, using an array of monitoring tools (a health-check sketch follows this posting).
- Build and maintain effective and productive relationships with stakeholders in business, development, infrastructure, and third-party systems / data providers and vendors.
- Assist in the process to approve application code releases as well as tasks assigned to support.
- Keep key stakeholders informed using communication templates.
- Approach support with a proactive attitude and a desire to seek root cause through in-depth analysis, and strive to reduce inefficiencies and manual effort.
- Mentor and guide junior team members, fostering technical upskilling and knowledge sharing.
- Provide strategic input into disaster recovery planning, failover strategies and business continuity procedures.
- Collaborate and deliver on initiatives that drive stability in the environment.
- Perform reviews of all open production items with the development team and push for updates and resolutions to outstanding tasks and recurring issues.
- Drive service resilience by implementing SRE (Site Reliability Engineering) principles, ensuring proactive monitoring, automation and operational efficiency.
- Ensure regulatory and compliance adherence, managing audits, access reviews, and security controls in line with organizational policies.

The candidate will have to work in shifts as part of a rota covering APAC and EMEA hours between 07:00 IST and 09:00 PM IST (2 shifts). In the event of major outages or issues we may ask for flexibility to help provide appropriate cover. Weekend on-call coverage needs to be provided on a rotational/need basis.

Your Skills And Experience
- 9-15 years of experience in providing hands-on IT application support.
- Experience in managing vendor teams providing 24x7 support.
- Preferred: team lead experience; experience in an investment bank or financial institution.
- Bachelor's degree from an accredited college or university with a concentration in Computer Science or an IT-related discipline (or equivalent work experience/diploma/certification).
- Preferred: ITIL v3 Foundation certification or higher.
- Knowledgeable in cloud products like Google Cloud Platform (GCP) and hybrid applications.
- Strong understanding of ITIL/SRE/DevOps best practices for supporting a production environment.
- Understanding of KPIs, SLOs, SLAs and SLIs.
- Monitoring tools: knowledge of Elastic Search, Control-M, Grafana, Geneos, OpenShift, Prometheus, Google Cloud Monitoring, Airflow, Splunk.
- Working knowledge of creating dashboards and reports for senior management.
- Red Hat Enterprise Linux (RHEL): professional skill in searching logs, running process commands, starting/stopping processes, and using OS commands to aid in resolving or investigating issues. Shell scripting knowledge is a plus.
- Understanding of database concepts and exposure to working with Oracle, MS SQL, BigQuery and similar databases.
- Ability to work across countries, regions, and time zones with a broad range of cultures and technical capability.

Skills That Will Help You Excel
- Strong written and oral communication skills, including the ability to communicate technical information to a non-technical audience, and good analytical and problem-solving skills.
- Proven experience in leading L2 support teams, including managing vendor teams and offshore resources.
- Able to train, coach, and mentor, and know where each technique is best applied.
- Experience with GCP or another public cloud provider to build applications.
- Experience in an investment bank, financial institution or large corporation using enterprise hardware and software.
- Knowledge of Actimize, Mantas, and case management software is good to have.
- Working knowledge of Big Data (Hadoop/Secure Data Lake) is a plus.
- Prior experience in automation projects is great to have.
- Exposure to Python, shell, Ansible or another scripting language for automation and process improvement.
- Strong stakeholder management skills, ensuring seamless coordination between business, development, and infrastructure teams.
- Ability to manage high-pressure issues, coordinating across teams to drive swift resolution.
- Strong negotiation skills with interface teams to drive process improvements and efficiency gains.

How We'll Support You
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
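As a flavor of the monitoring automation this role describes (real-time checks feeding alerts), here is a minimal, generic health-check sketch; the endpoint and logging setup are assumptions, not Deutsche Bank tooling:

```python
# Minimal sketch: probe an application endpoint and log an error on failure.
import logging
import urllib.request

logging.basicConfig(level=logging.INFO)

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError as exc:  # covers URLError, HTTPError, and timeouts
        logging.error("health check failed for %s: %s", url, exc)
        return False

if __name__ == "__main__":
    check_health("https://example.internal/healthz")  # hypothetical endpoint
```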

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Source: Naukri

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems: the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

Lead MLE Developer at ZS
ZS's Scaled AI practice is part of ZS's rich and advanced AI ecosystem. We are building next-generation AI-based analytics products, focused on creating innovative machine learning and engineering capabilities. Our team comprises Data Scientists, ML/Full Stack Engineers, UI Developers, Product/Program Managers and QA testers working together to build products that offer unique analytical solutions to our clients.

What You'll Do
- Build, refine and use ML engineering platforms and components.
- Scale machine learning algorithms to work on massive data sets with strict SLAs.
- Build and orchestrate model pipelines including feature engineering, inferencing and continuous model training.
- Implement MLOps including model KPI measurement, tracking, model drift detection and the model feedback loop (see the MLflow sketch after this posting).
- Collaborate with client-facing teams to understand business context at a high level and contribute to technical requirement gathering.
- Implement basic features aligning with technical requirements.
- Write production-ready code that is easily testable, understood by other developers, and accounts for edge cases and errors.
- Ensure the highest quality of deliverables by following architecture/design guidelines, coding best practices, and periodic design/code reviews.
- Write unit tests as well as higher-level tests to handle expected edge cases and errors gracefully, as well as happy paths.
- Use bug tracking, code review, version control and other tools to organize and deliver work.
- Participate in scrum calls and agile ceremonies, and effectively communicate work progress, issues and dependencies.
- Consistently contribute to researching and evaluating the latest architecture patterns and technologies through rapid learning, conducting proofs-of-concept and creating prototype solutions.

What You'll Bring
- A master's or bachelor's degree in Computer Science or a related field from a top university.
- 5+ years' hands-on experience in ML development.
- Good fundamentals of machine learning.
- Strong programming expertise in Python and PySpark/Scala.
- Expertise in crafting ML models for high performance and scalability.
- Experience in implementing feature engineering, inferencing pipelines, and real-time model predictions.
- Experience in MLOps to measure and track model performance; experience working with MLflow.
- Experience with Spark or other distributed computing frameworks.
- Experience with ML platforms like SageMaker and Kubeflow.
- Experience with pipeline orchestration tools such as Airflow.
- Experience in deploying models to cloud services like AWS, Azure, GCP, and Azure ML.
- Expertise in SQL and SQL databases.
- Knowledge of core CS concepts such as common data structures and algorithms.
- Ability to collaborate well with teams with different backgrounds/expertise/functions.

Additional Skills
- Understanding of DevOps, CI/CD, and data security; experience in designing on a cloud platform.
- Experience in data engineering in Big Data systems.

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find Out More At www.zs.com
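The posting asks for MLflow experience for tracking model performance; a minimal sketch of what that looks like in practice (run name, params, and metric values are all illustrative):

```python
# Minimal sketch of MLflow experiment tracking; values are illustrative.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "logreg")
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("auc", 0.87)  # logged after offline evaluation
```

Runs logged this way can then be compared in the MLflow UI to watch for model drift, as the responsibilities above mention.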

Posted 1 week ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site

Source: LinkedIn

Experience: More than 3 years in data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.

Technical Skills:
- Proficiency in the Databricks platform, its management, and optimization.
- Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS.
- Proven experience in data engineering performance tuning and analytical understanding in business and program contexts.
- Solid experience in Python development, specifically in PySpark within the AWS Cloud environment, including experience with Terraform.
- Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying.
- Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration.
- Understanding of continuous deployment (CI/CD) processes.
- Experience with Airflow and additional Apache Spark knowledge is advantageous.
- Exposure to ETL tools, including Informatica.

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Hyderabad, Chennai

Work from Office

Source: Naukri

Key skills: Python, SQL, PySpark, Databricks, AWS (mandatory). Added advantage: life sciences/pharma.

Roles and Responsibilities
1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data (see the sketch after this posting).
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
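Responsibility 5 above calls for data-quality checks; as a minimal sketch (the path, column name, and threshold are assumptions), a PySpark validation gate might look like this:

```python
# Minimal sketch of a data-quality gate: fail the pipeline if too many nulls.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_gate").getOrCreate()
df = spark.read.parquet("s3://example-curated/daily_orders/")  # hypothetical path

total = max(df.count(), 1)  # guard against division by zero on empty input
null_ratio = df.filter(F.col("revenue").isNull()).count() / total
assert null_ratio < 0.01, f"revenue null ratio too high: {null_ratio:.2%}"
```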

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Source: LinkedIn

Job Summary
We are seeking a Data Scientist with 6 to 9 years of experience to join our team. The ideal candidate will have expertise in PySpark, statistics, Azure OpenAI Service, EDA, Airflow, OpenCV, artificial intelligence, natural language processing, deep learning, and PyTorch. This role requires working from the office during day shifts, with no travel required.

Responsibilities
- Develop and implement data models using PySpark to analyze large datasets.
- Conduct statistical analysis to identify trends and patterns in data.
- Utilize Azure OpenAI Service to build and deploy AI models.
- Perform exploratory data analysis (EDA) to uncover insights from data.
- Manage and orchestrate data workflows using Airflow.
- Apply OpenCV techniques for image processing and computer vision tasks.
- Develop and deploy artificial intelligence solutions to solve complex problems.
- Implement natural language processing (NLP) models to analyze and interpret text data.
- Design and train deep learning models using PyTorch.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Provide actionable insights and recommendations based on data analysis.
- Ensure data quality and integrity throughout the data lifecycle.
- Stay updated with the latest advancements in data science and AI technologies.

Qualifications
- Strong experience in PySpark for data processing and analysis.
- In-depth knowledge of statistical methods and their applications.
- Proficiency in using Azure OpenAI Service for AI model deployment.
- Hands-on experience in performing EDA to derive insights.
- Skill in using Airflow for workflow management.
- Expertise in OpenCV for image processing tasks.
- Experience in developing AI solutions for various applications.
- Proficiency in NLP techniques for text analysis.
- Experience in training deep learning models using PyTorch.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication skills to present findings effectively.

Certifications Required
- Certified Data Scientist
- Microsoft Certified: Azure AI Engineer Associate
- PyTorch Developer Certification

Posted 1 week ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Source: Naukri

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

In your role, you will be responsible for:
- Working with multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer, etc. (a BigQuery sketch follows this posting).
- Applying hands-on Python and SQL experience; being proactive, collaborative, and able to respond to critical situations.
- Analysing data for functional business requirements and interfacing directly with the customer.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer.
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- End-to-end functional knowledge of the data pipeline/transformation implementations you have delivered; you should understand the purpose/KPIs for which each data transformation was done.

Preferred technical and professional experience
- Experience with AEM core technologies: OSGi services, Apache Sling, the Granite framework, the Java Content Repository API, Java 8+, and localization.
- Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git; knowledge of patterns and good practices for designing and developing quality, clean code; knowledge of HTML, CSS, JavaScript and jQuery.
- Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
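For the BigQuery work the role centers on, here is a minimal sketch with the official Python client; the dataset and table names are hypothetical:

```python
# Minimal sketch: run an aggregate query against BigQuery and print rows.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials
sql = "SELECT status, COUNT(*) AS n FROM `example.sales.orders` GROUP BY status"
for row in client.query(sql).result():
    print(row.status, row.n)
```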

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Where: Hyderabad/Bengaluru, India (hybrid mode, 3 days/week in office)

Job Description
- Collaborate with stakeholders to develop a data strategy that meets enterprise needs and industry requirements.
- Create an inventory of the data necessary to build and implement a data architecture.
- Envision data pipelines and how data will flow through the data landscape.
- Evaluate current data management technologies and what additional tools are needed.
- Determine upgrades and improvements to current data architectures.
- Design, document, build and implement database architectures and applications.
- Hands-on experience in building high-scale OLAP systems.
- Build data models for database structures, analytics, and use cases.
- Develop and enforce database development standards with solid DB/query optimization capabilities.
- Integrate new systems and functions like security, performance, scalability, governance, reliability, and data recovery.
- Research new opportunities and create methods to acquire data.
- Develop measures that ensure data accuracy, integrity, and accessibility.
- Continually monitor, refine, and report on data management system performance.

Required Qualifications And Skillset
- Extensive knowledge of the Azure and GCP clouds and the DataOps data ecosystem (super strong in one of the two clouds and satisfactory in the other).
- Hands-on expertise in systems like Snowflake, Synapse, SQL DW, BigQuery, and Cosmos DB (expertise in any 3 is a must).
- Azure Data Factory, Dataiku, Fivetran, Google Cloud Dataflow (any 2).
- Hands-on experience working with services/technologies like Apache Airflow, Cloud Composer, Oozie, Azure Data Factory, and Cloud Data Fusion (expertise in any 2 is required).
- Well-versed in data services, integration, ingestion, ELT/ETL, data governance, security, and meta-driven development.
- Expertise in RDBMSs (relational database management systems): writing complex SQL logic, DB/query optimization, data modelling, and managing high data volumes for mission-critical applications.
- Strong grip on programming using Python and PySpark.
- Clear understanding of data best practices prevailing in the industry.
- Preference for candidates holding an Azure or GCP architect certification (either of the two would suffice).
- Strong networking and data security experience.

Awareness Of The Following
- Application development understanding (full stack).
- Experience with open-source tools like Kafka, Spark, Splunk, Superset, etc.
- Good understanding of the analytics platform landscape, including AI/ML.
- Experience in any data visualization tool like Power BI, Tableau, Qlik, QuickSight, etc.

About Us
Gramener is a design-led data science company. We build custom Data & AI solutions that help solve complex business problems with actionable insights and compelling data stories. We partner with enterprise data and digital transformation teams to improve the data-driven decision-making culture across the organization. Our open-standard low-code platform, Gramex, rapidly builds engaging Data & AI solutions across multiple business verticals and use cases. Our solutions and technology have been recognized by analysts such as Gartner and Forrester and have won several awards.

We Offer You
- A chance to try new things and take risks.
- Meaningful problems you'll be proud to solve.
- People you will be comfortable working with.
- A transparent and innovative work environment.

To know more about us, visit the Gramener website and the Gramener blog.

If this opportunity interests you, kindly share the details below:
- Total experience
- Relevant experience
- Current CTC
- Notice period
- Expected CTC
- Current location

Skills: OLAP, Microsoft Azure, Architecting

Posted 1 week ago

Apply

7.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Role & Responsibilities
- 3+ years of experience with Snowflake (Snowpipe, Streams, Tasks).
- Strong proficiency in SQL for high-performance data transformations.
- Hands-on experience building ELT pipelines using cloud-native tools.
- Proficiency in dbt for data modeling and workflow automation.
- Python skills (Pandas, PySpark, SQLAlchemy) for data processing.
- Experience with orchestration tools like Airflow or Prefect (see the sketch after this posting).

Preferred Candidate Profile
- Hands-on with Python, including libraries like Pandas, PySpark, or SQLAlchemy.
- Experience with data cataloging, metadata management, and column-level lineage.
- Exposure to BI tools like Tableau or Power BI.
- Certifications: Snowflake SnowPro Core Certification preferred.

Contact details: Sindhu@iflowonline.com or 9154984810
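The posting pairs dbt with an orchestrator (Airflow or Prefect). As a minimal sketch in the Prefect 2 style, one way to schedule a dbt build follows; the flow name, model selector, and retry policy are assumptions:

```python
# Minimal Prefect 2 sketch: a flow with one task that shells out to dbt.
import subprocess
from prefect import flow, task

@task(retries=2)
def dbt_run() -> None:
    # --select is illustrative; check=True surfaces dbt failures as task failures
    subprocess.run(["dbt", "run", "--select", "daily_marts"], check=True)

@flow(name="daily-dbt")
def daily_dbt_flow() -> None:
    dbt_run()

if __name__ == "__main__":
    daily_dbt_flow()
```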

Posted 1 week ago

Apply

9.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Job Description
The Oracle Cloud Infrastructure (OCI) team offers a unique opportunity to design, build, and operate a comprehensive suite of large-scale, integrated cloud services within a broadly distributed, multi-tenant cloud environment. With a commitment to delivering exceptional cloud products, OCI empowers customers to tackle some of the world's most pressing challenges, providing tailored solutions that meet their evolving needs.

Are you passionate about designing and building large-scale distributed monitoring and analytics solutions for the cloud? Do you thrive in environments that combine the agility and innovation of a startup with the resources and stability of a Fortune 100 company? As a member of our fast-growing team, you'll enjoy a high degree of autonomy, diverse challenges, and unparalleled opportunities for growth. This role offers substantial upside potential, high visibility, and accelerated career advancement. Join our team of talented individuals and tackle complex problems in distributed systems, data processing, metrics collection, data analytics, network monitoring, and multi-tenant Infrastructure-as-a-Service (IaaS) at massive scale, driving innovation and excellence in the cloud.

We are seeking an experienced Principal Engineer to design and develop software, including automated test suites, for major components in our Network Monitoring & Analytics Stack. As a member of our team, you will have the opportunity to build large-scale distributed monitoring and analytics solutions for the cloud, working with a talented group of engineers to solve complex problems in distributed systems, data processing, and network monitoring. Do you thrive in a fast-paced environment and want to be an integral part of a truly great team? Come join us!

Required Qualifications:
- 9+ years of experience in software development.
- 3+ years of experience in developing large-scale distributed services/applications.
- Proficiency with Java/Python/C++/Go and object-oriented programming.
- Excellent knowledge of data structures and search/sort algorithms.
- Excellent organizational, verbal, and written communication skills.
- Bachelor's degree in Computer Science.

Desired Qualifications:
- Knowledge of cloud computing and networking technologies, including monitoring services.
- Network management technologies such as SNMP, gNMI, protobuf, YANG models, etc.
- Networking technologies such as L2/L3, TCP/IP, sockets, BGP, OSPF, LLDP, ICMP, etc.
- Experience developing service-oriented systems.
- Exposure to Kafka, Prometheus, Spark, Airflow, Flink or other open-source distributed data streaming platforms and databases (a consumer sketch follows this posting).
- Experience developing automated test suites.
- Experience with Jira, Confluence, Bitbucket.
- Knowledge of Scrum and Agile methodologies.

Responsibilities
- Design and develop software for major components in our Network Monitoring & Analytics Stack.
- Build complex distributed systems involving large amounts of data handling, including collecting metrics, building data pipelines, and analytics for real-time, online, and batch processing.
- Develop automated test suites to ensure high-quality solutions.
- Collaborate with cross-functional teams to deliver cloud services that meet customer needs.
- Participate in an agile environment, contributing to the development of innovative new systems to power business-critical applications.

Qualifications
Career Level - IC4

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
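Given the desired exposure to Kafka-style streaming for network metrics, a minimal consumer sketch with kafka-python follows; the topic, broker address, and message fields are assumptions:

```python
# Minimal sketch: consume JSON network metrics from a Kafka topic.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "network-metrics",                    # hypothetical topic
    bootstrap_servers="localhost:9092",   # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.value.get("device"), msg.value.get("latency_ms"))
```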

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Title: Senior Consultant
Career Level: D

Introduction to role
Are you ready to disrupt an industry and change lives? As a Senior Consultant within our Operations IT team, you'll be at the forefront of transforming our ability to develop life-changing medicines. Our work directly impacts patients, empowering the business to perform at its peak by combining innovative science with leading digital technology platforms and data. Join us in our journey to become a digital and data-led enterprise, where you'll collaborate with a diverse team of UI/UX designers, architects, full-stack engineers, data engineers, and DevOps engineers. Together, we'll develop high-impact, scalable, and innovative products that deliver actionable insights and support our business goals. Are you ready to make the impossible possible?

Accountabilities
As a Full Stack Data Engineer, you will:
- Develop, maintain, and optimize scalable data pipelines for data integration, transformation, and analysis, ensuring high performance and reliability.
- Demonstrate proficiency in ETL/ELT processes, including writing, testing, and maintaining high-quality code for data ingestion and transformation.
- Improve the efficiency and performance of data pipelines and workflows, applying advanced data engineering techniques and standard methodologies.
- Develop and maintain data models that represent data structure and relationships, ensuring alignment with business requirements and enhancing data usability.
- Develop APIs and microservices for seamless data integration across platforms, and collaborate with software engineers to integrate front-end and back-end components with the data platform.
- Optimize and tune databases and queries for maximum performance and reliability, and maintain existing data pipelines to improve performance and quality.
- Mentor other developers on standard methodologies, conduct peer programming and code reviews, and help evolve the systems architecture to consistently improve development efficiency.
- Ensure compliance with data security and privacy regulations, and implement data validation and cleansing techniques to maintain consistency.
- Stay updated with emerging technologies and standard methodologies in data engineering and software development, and contribute to all phases of the software development lifecycle (SDLC).
- Work closely with data scientists, analysts, partners, and product managers to understand requirements, deliver high-quality data solutions, and support alignment of data sources and specifications.
- Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing to ensure data solutions meet quality standards.
- Work with the QA team to develop testing protocols and identify and correct challenges.
- Maintain clear documentation for Knowledge Base Articles (KBAs), data models, pipeline documentation, and deployment release notes.
- Diagnose and resolve complex issues related to data pipelines, backend services, and frontend applications, ensuring smooth operation and user satisfaction.
- Use and manage cloud-based services (e.g., AWS) for data storage and processing, and implement and manage CI/CD pipelines, version control, and deployment processes.
- Liaise with internal teams and third-party vendors to address application issues and project needs effectively.
- Create and maintain data visualizations and dashboards to provide actionable insights.

Essential Skills/Experience
- Minimum 7+ years of experience in developing and delivering software engineering and data engineering solutions.
- Extensive experience with ELT/ETL tools such as SnapLogic, Fivetran, or similar.
- Deep expertise in Snowflake, dbt (Data Build Tool), and similar data warehousing technologies.
- Proficient in designing and optimizing data models and transformations for large-scale data systems.
- Strong knowledge of data pipeline principles, including dimensional modelling, schema design, and data integration patterns.
- Familiarity with Data Mesh and Data Product concepts, including experience in delivering and managing data products.
- Strong data orchestration skills to effectively manage and streamline data workflows and processes.
- Proficiency in data visualization technologies, with experience in advanced use of tools such as Power BI or similar.
- Solid understanding of DevOps practices, including CI/CD pipelines and version control systems like GitHub, with the ability to implement and maintain automated deployment and integration processes.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Automated testing frameworks (unit testing, system integration testing, regression testing, and data testing).
- Strong proficiency in programming languages such as Python, Java, or similar.
- Experience with both relational (e.g., MySQL, PostgreSQL) and NoSQL databases.
- Deep technical expertise in building software and analytical solutions with a modern JavaScript stack (Node.js, ReactJS, AngularJS).
- Strong knowledge of cloud-based data, compute, and storage services, including AWS (S3, EC2, RDS, EBS, Lambda), orchestration services (e.g., Airflow, MWAA), and containerization services (e.g., ECS, EKS).
- Excellent communication and interpersonal skills, with a proven ability to manage partner expectations, gather requirements, and translate them into technical solutions.
- Experience working in Agile development environments, with a strong understanding of Agile principles and practices, and the ability to adapt to changing requirements and contribute to iterative development cycles.
- Advanced SQL skills for data analysis.
- Strong problem-solving, analytical, and reasoning skills, with the ability to visualize processes and outcomes; a strategic thinker focused on finding innovative solutions to complex data challenges.

Desirable Skills/Experience
- Bachelor's or Master's degree in health sciences, life sciences, data management, information technology or a related field, or equivalent experience.
- Significant experience working in the pharmaceutical industry with a deep understanding of industry-specific data requirements.
- Demonstrated ability to manage and collaborate with a diverse range of partners, ensuring high levels of satisfaction and successful project delivery.
- Proven capability to work independently and thrive in a dynamic, fast-paced environment, managing multiple tasks and adapting to evolving conditions.
- Experience working in large multinational organizations, especially within pharmaceutical or similar environments, demonstrating familiarity with global data systems and processes.
- Certification in AWS Cloud or other relevant data engineering or software engineering certifications, showcasing advanced knowledge and technical proficiency.
- Awareness of use-case-specific GenAI tools available in the market and their application in day-to-day work scenarios.
- Working knowledge of basic prompting techniques and commitment to continuous improvement of these skills.
- Ability to stay up to date with developments in AI and GenAI, applying new insights to work-related situations.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work on average a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique, ambitious world.

AstraZeneca is where innovation meets impact! We couple technology with an inclusive approach to cross international boundaries, developing a leading ecosystem. Our diverse teams work multi-functionally at scale, bringing together the best minds from across the globe to uncover new solutions. We think holistically about applying technology, building partnerships inside and out, driving simplicity and efficiencies, and making a real difference. Ready to make your mark? Apply now!

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Title: Senior Consultant Career Level: D Introduction to role Are you ready to disrupt an industry and change lives? As a Senior Consultant within our Operations IT team, you'll be at the forefront of transforming our ability to develop life-changing medicines. Our work directly impacts patients, empowering the business to perform at its peak by combining innovative science with leading digital technology platforms and data. Join us in our journey to become a digital and data-led enterprise, where you'll collaborate with a diverse team of UI/UX designers, architects, full-stack engineers, data engineers, and DevOps engineers. Together, we'll develop high-impact, scalable, and innovative products that deliver actionable insights and support our business goals. Are you ready to make the impossible possible? Accountabilities As a Full Stack Data Engineer, you will: Develop, maintain, and optimize scalable data pipelines for data integration, transformation, and analysis, ensuring high performance and reliability. Demonstrate proficiency in ETL/ELT processes, including writing, testing, and maintaining high-quality code for data ingestion and transformation. Improve the efficiency and performance of data pipeline and workflows, applying advanced data engineering techniques and standard methodologies. Develop and maintain data models that represent data structure and relationships, ensuring alignment with business requirements and enhancing data usability. Develop APIs and microservices for seamless data integration across platforms and collaborate with software engineers to integrate front-end and back-end components with the data platform. Optimize and tune databases and queries for maximum performance and reliability and maintain existing data pipelines to improve performance and quality. Mentor other developers on standard methodologies, conduct peer programming, code reviews, and help evolve systems architecture to consistently improve development efficiency. Ensure compliance with data security and privacy regulations and implement data validation and cleansing techniques to maintain consistency. Stay updated with emerging technologies, standard methodologies in data engineering and software development, and contribute to all phases of the software development lifecycle (SDLC) processes. Work closely with data scientists, analysts, partners, and product managers to understand requirements, deliver high-quality data solutions, and support alignment of data sources and specifications. Perform unit testing, system integration testing, regression testing, and assist with user acceptance testing to ensure data solutions meet quality standards. Work with the QA Team to develop testing protocols and identify and correct challenges. Maintain clear documentation for Knowledge Base Articles (KBAs), data models, pipeline documentation, and deployment release notes. Diagnose and resolve complex issues related to data pipelines, backend services, and frontend applications, ensuring smooth operation and user satisfaction. Use and manage cloud-based services (e.g., AWS) for data storage and processing, and implement and manage CI/CD pipelines, version control, and deployment processes. Liaise with internal teams and third-party vendors to address application issues and project needs effectively. Create and maintain data visualizations and dashboards to provide actionable insights. 
Essential Skills/Experience

  • Minimum 7+ years of experience in developing and delivering software engineering and data engineering solutions.
  • Extensive experience with ELT/ETL tools such as SnapLogic, FiveTran, or similar.
  • Deep expertise in Snowflake, DBT (Data Build Tool), and similar data warehousing technologies.
  • Proficiency in designing and optimizing data models and transformations for large-scale data systems.
  • Strong knowledge of data pipeline principles, including dimensional modelling, schema design, and data integration patterns.
  • Familiarity with Data Mesh and Data Product concepts, including experience in delivering and managing data products.
  • Strong data orchestration skills to effectively manage and streamline data workflows and processes.
  • Proficiency in data visualization technologies, with experience in advanced use of tools such as Power BI or similar.
  • Solid understanding of DevOps practices, including CI/CD pipelines and version control systems such as GitHub, with the ability to implement and maintain automated deployment and integration processes.
  • Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
  • Experience with automated testing frameworks (unit testing, system integration testing, regression testing, and data testing).
  • Strong proficiency in programming languages such as Python, Java, or similar.
  • Experience with both relational (e.g., MySQL, PostgreSQL) and NoSQL databases.
  • Deep technical expertise in building software and analytical solutions with a modern JavaScript stack (Node.js, ReactJS, AngularJS).
  • Strong knowledge of cloud-based data, compute, and storage services, including AWS (S3, EC2, RDS, EBS, Lambda), orchestration services (e.g., Airflow, MWAA), and containerization services (e.g., ECS, EKS).
  • Excellent communication and interpersonal skills, with a proven ability to manage partner expectations, gather requirements, and translate them into technical solutions.
  • Experience working in Agile development environments, with a strong understanding of Agile principles and practices, and the ability to adapt to changing requirements and contribute to iterative development cycles.
  • Advanced SQL skills for data analysis.
  • Strong problem-solving, analytical, and reasoning skills, with the ability to visualize processes and outcomes.
  • Strategic thinker with a focus on finding innovative solutions to complex data challenges.

Desirable Skills/Experience

  • Bachelor's or Master's degree in Health Sciences, Life Sciences, Data Management, Information Technology, or a related field, or equivalent experience.
  • Significant experience working in the pharmaceutical industry, with a deep understanding of industry-specific data requirements.
  • Demonstrated ability to manage and collaborate with a diverse range of partners, ensuring high levels of satisfaction and successful project delivery.
  • Proven capability to work independently and thrive in a dynamic, fast-paced environment, managing multiple tasks and adapting to evolving conditions.
  • Experience working in large multinational organizations, especially within pharmaceutical or similar environments, demonstrating familiarity with global data systems and processes.
  • Certification in AWS Cloud or other relevant data engineering or software engineering certifications, showcasing advanced knowledge and technical proficiency.
  • Awareness of use-case-specific GenAI tools available in the market and their application in day-to-day work scenarios.
  • Working knowledge of basic prompting techniques and a commitment to continuously improving these skills.
  • Ability to stay up to date with developments in AI and GenAI, applying new insights to work-related situations.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible: we balance the expectation of being in the office with respect for individual flexibility. Join us in our unique, ambitious world.

AstraZeneca is where innovation meets impact! We couple technology with an inclusive approach, crossing international boundaries to develop a leading ecosystem. Our diverse teams work multi-functionally at scale, bringing together the best minds from across the globe to uncover new solutions. We think holistically about applying technology, building partnerships inside and out, and driving simplicity and efficiency to make a real difference. Ready to make your mark? Apply now!

Posted 1 week ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description

The Oracle Cloud Infrastructure (OCI) team offers a unique opportunity to design, build, and operate a comprehensive suite of large-scale, integrated cloud services within a broadly distributed, multi-tenant cloud environment. With a commitment to delivering exceptional cloud products, OCI empowers customers to tackle some of the world's most pressing challenges, providing tailored solutions that meet their evolving needs.

Are you passionate about designing and building large-scale distributed monitoring and analytics solutions for the cloud? Do you thrive in environments that combine the agility and innovation of a startup with the resources and stability of a Fortune 100 company? As a member of our fast-growing team, you'll enjoy a high degree of autonomy, diverse challenges, and unparalleled opportunities for growth. This role offers substantial upside potential, high visibility, and accelerated career advancement. Join our team of talented individuals and tackle complex problems in distributed systems, data processing, metrics collection, data analytics, network monitoring, and multi-tenant Infrastructure-as-a-Service (IaaS) at massive scale, driving innovation and excellence in the cloud.

We are seeking an experienced Principal Engineer to design and develop software, including automated test suites, for major components in our Network Monitoring & Analytics Stack. As a member of our team, you will have the opportunity to build large-scale distributed monitoring and analytics solutions for the cloud, working with a talented group of engineers to solve complex problems in distributed systems, data processing, and network monitoring. Do you thrive in a fast-paced environment, and want to be an integral part of a truly great team? Come join us!

Required Qualifications:

  • 9+ years of experience in software development
  • 3+ years of experience in developing large-scale distributed services/applications
  • Proficiency with Java/Python/C++/Go and object-oriented programming
  • Excellent knowledge of data structures and search/sort algorithms
  • Excellent organizational, verbal, and written communication skills
  • Bachelor's degree in Computer Science

Desired Qualifications:

  • Knowledge of cloud computing and networking technologies, including monitoring services
  • Network management technologies such as SNMP, gNMI, protobuf, YANG models, etc.
  • Networking technologies such as L2/L3, TCP/IP, sockets, BGP, OSPF, LLDP, ICMP, etc.
  • Experience developing service-oriented systems
  • Exposure to Kafka, Prometheus, Spark, Airflow, Flink, or other open-source distributed data streaming platforms and databases
  • Experience developing automated test suites
  • Experience with Jira, Confluence, and BitBucket
  • Knowledge of Scrum and Agile methodologies

Responsibilities

  • Design and develop software for major components in our Network Monitoring & Analytics Stack
  • Build complex distributed systems involving large amounts of data handling, including collecting metrics, building data pipelines, and analytics for real-time, online, and batch processing
  • Develop automated test suites to ensure high-quality solutions
  • Collaborate with cross-functional teams to deliver cloud services that meet customer needs
  • Participate in an agile environment, contributing to the development of innovative new systems to power business-critical applications

Qualifications

Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges.
We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description: Machine Learning / Data Science Engineer / Data Scientist

Location: Pune
Experience Required: 3–6 years
Type: Full-Time
Education: BTech / MTech / MSc / PhD in Computer Science, Data Science, Applied Mathematics, Statistics, or a related field.

About Anervea.ai

Anervea.ai is building a next-generation intelligence stack for the pharmaceutical industry. Our products help commercial, clinical, and medical affairs teams make smarter decisions, faster. From predicting the success of clinical trials and decoding competitor movement, to surfacing real-time KOL signals and automating HCP engagement, our platform powers strategic decision-making at scale. We're not a services firm; we're a product-first, AI-native company solving real problems using applied machine learning, generative AI, and life sciences data. Our clients include major US and EU pharma companies, and our team is a mix of engineers, researchers, and life science domain experts. We're looking for ML engineers and data scientists who are passionate about learning, driven to build usable solutions, and ready to push boundaries.

Role Overview

As an ML / Data Science Engineer at Anervea, you'll work on designing, training, deploying, and maintaining machine learning models across multiple products. You'll build models that predict clinical trial outcomes, extract insights from structured and unstructured healthcare data, and support real-time scoring for sales or market access use cases. You'll collaborate closely with AI engineers, backend developers, and product owners to translate data into product features that are explainable, reliable, and impactful.

Key Responsibilities

  • Develop and optimize predictive models using algorithms such as XGBoost, Random Forest, Logistic Regression, and ensemble methods
  • Engineer features from real-world healthcare data (clinical trials, treatment adoption, medical events, digital behavior)
  • Analyze datasets from sources like ClinicalTrials.gov, PubMed, Komodo, Apollo.io, and internal survey pipelines
  • Build end-to-end ML pipelines for inference and batch scoring
  • Collaborate with AI engineers to integrate LLM-generated features with traditional models
  • Ensure explainability and robustness of models using SHAP, LIME, or custom logic
  • Validate models against real-world outcomes and client feedback
  • Prepare clean, structured datasets using SQL and Pandas
  • Communicate insights clearly to product, business, and domain teams
  • Document all processes, assumptions, and model outputs thoroughly

Technical Skills Required

  • Strong programming skills in Python (NumPy, Pandas, scikit-learn, XGBoost, LightGBM)
  • Experience with statistical modeling and classification algorithms
  • Solid understanding of feature engineering, model evaluation, and validation techniques
  • Exposure to real-world healthcare, trial, or patient data (strong bonus)
  • Comfort working with unstructured data and data cleaning techniques
  • Knowledge of SQL and NoSQL databases
  • Familiarity with ML lifecycle tools (MLflow, Airflow, or similar)
  • Bonus: experience working alongside LLMs or incorporating generative features into ML
  • Bonus: knowledge of NLP preprocessing, embeddings, or vector similarity methods

Personal Attributes

  • Strong analytical and problem-solving mindset
  • Ability to convert abstract questions into measurable models
  • Attention to detail and high standards for model quality
  • Willingness to learn life sciences concepts relevant to each use case
  • Clear communicator who can simplify complexity for product and business teams
  • Independent learner who actively follows new trends in ML and data science
  • Reliable, accountable, and driven by outcomes, not just code

Bonus Qualities

  • Experience building models for healthcare, pharma, or biotech
  • Published work or open-source contributions in data science
  • Strong business intuition on how to turn models into product decisions

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Build the future of the AI Data Cloud. Join the Snowflake team.

We are seeking a talented and motivated Analytics Engineer to join our team in Pune, India. This role will be pivotal in building and maintaining the data infrastructure that powers our cutting-edge AI applications, enabling us to deliver intelligent solutions to our customers and internal stakeholders. If you are passionate about data, AI, and working with a world-class cloud data platform, we want to hear from you.

THE ROLE

As an Analytics Engineer focused on AI applications, you will be responsible for designing, developing, and maintaining robust and scalable data pipelines that feed our machine learning models and AI-driven features. You will collaborate closely with data scientists, AI researchers, software engineers, and product managers to understand data requirements and deliver high-quality data solutions. Your work will directly impact the performance and reliability of our AI systems, contributing to Snowflake's innovation in the AI space.

Job Description

As an Analytics Engineer supporting AI Applications, you will:

Data Pipeline Development & Maintenance:
  • Design, build, and maintain scalable, reliable ETL/ELT pipelines in Snowflake to support AI model training, evaluation, and deployment.
  • Integrate data from various sources, including internal systems, Salesforce, and other external vendor platforms.
  • Develop a willingness to learn B2B concepts and the intricacies of diverse data sources.
  • Implement data quality frameworks and ensure data integrity for AI applications.

System Integration & Automation:
  • Develop and automate data processes using SQL, Python, and other relevant technologies.
  • Work with modern data stack tools and cloud-based data platforms, with a strong emphasis on Snowflake.

MLOps Understanding & Support:
  • Gain an understanding of MLOps principles and contribute to the operationalization of machine learning models.
  • Support data versioning, model monitoring, and feedback loops for AI systems.

Release Management & Collaboration:
  • Participate actively in frequent release and testing cycles to ensure the high-quality delivery of data features and reduce risks in production AI systems.
  • Develop and execute QA/test strategies for data pipelines and integrations, often coordinating with cross-functional teams.
  • Gain experience with access control systems, CI/CD pipelines, and release testing methodologies to ensure secure and efficient deployments.

Performance Optimization & Scalability:
  • Monitor and optimize the performance of data pipelines and queries.
  • Ensure data solutions are scalable to handle growing data volumes and evolving AI application needs.

What You Will Need

Required Skills:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related STEM (Science, Technology, Engineering, Mathematics) field.
  • Strong proficiency in SQL for data manipulation, querying, and optimization.
  • Proficiency in Python for data processing, automation, and scripting.
  • Hands-on experience with Snowflake or other cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Azure Synapse).
  • A proactive and collaborative mindset with a strong desire to learn new technologies and B2B concepts.

Preferred Skills:
  • Experience in building and maintaining ETL/ELT pipelines for AI/ML use cases.
  • Understanding of MLOps principles and tools.
  • Experience with data quality frameworks and tools.
  • Familiarity with data modeling techniques.
  • Experience with workflow orchestration tools (e.g., Airflow, Dagster).
  • Knowledge of software engineering best practices, including version control (e.g., Git), CI/CD, and testing.
  • Experience coordinating QA/test strategies for cross-team integration.
  • Familiarity with access control systems (e.g., Okta) and release testing.
  • Excellent problem-solving and analytical skills.
  • Strong communication and interpersonal skills.

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com

Posted 1 week ago

Apply

9.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Job Description

The Oracle Cloud Infrastructure (OCI) team offers a unique opportunity to design, build, and operate a comprehensive suite of large-scale, integrated cloud services within a broadly distributed, multi-tenant cloud environment. With a commitment to delivering exceptional cloud products, OCI empowers customers to tackle some of the world's most pressing challenges, providing tailored solutions that meet their evolving needs.

Are you passionate about designing and building large-scale distributed monitoring and analytics solutions for the cloud? Do you thrive in environments that combine the agility and innovation of a startup with the resources and stability of a Fortune 100 company? As a member of our fast-growing team, you'll enjoy a high degree of autonomy, diverse challenges, and unparalleled opportunities for growth. This role offers substantial upside potential, high visibility, and accelerated career advancement. Join our team of talented individuals and tackle complex problems in distributed systems, data processing, metrics collection, data analytics, network monitoring, and multi-tenant Infrastructure-as-a-Service (IaaS) at massive scale, driving innovation and excellence in the cloud.

We are seeking an experienced Principal Engineer to design and develop software, including automated test suites, for major components in our Network Monitoring & Analytics Stack. As a member of our team, you will have the opportunity to build large-scale distributed monitoring and analytics solutions for the cloud, working with a talented group of engineers to solve complex problems in distributed systems, data processing, and network monitoring. Do you thrive in a fast-paced environment, and want to be an integral part of a truly great team? Come join us!

Required Qualifications:

  • 9+ years of experience in software development
  • 3+ years of experience in developing large-scale distributed services/applications
  • Proficiency with Java/Python/C++/Go and object-oriented programming
  • Excellent knowledge of data structures and search/sort algorithms
  • Excellent organizational, verbal, and written communication skills
  • Bachelor's degree in Computer Science

Desired Qualifications:

  • Knowledge of cloud computing and networking technologies, including monitoring services
  • Network management technologies such as SNMP, gNMI, protobuf, YANG models, etc.
  • Networking technologies such as L2/L3, TCP/IP, sockets, BGP, OSPF, LLDP, ICMP, etc.
  • Experience developing service-oriented systems
  • Exposure to Kafka, Prometheus, Spark, Airflow, Flink, or other open-source distributed data streaming platforms and databases
  • Experience developing automated test suites
  • Experience with Jira, Confluence, and BitBucket
  • Knowledge of Scrum and Agile methodologies

Responsibilities

  • Design and develop software for major components in our Network Monitoring & Analytics Stack
  • Build complex distributed systems involving large amounts of data handling, including collecting metrics, building data pipelines, and analytics for real-time, online, and batch processing
  • Develop automated test suites to ensure high-quality solutions
  • Collaborate with cross-functional teams to deliver cloud services that meet customer needs
  • Participate in an agile environment, contributing to the development of innovative new systems to power business-critical applications

Qualifications

Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges.
We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 week ago

Apply

1.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description

The Risk Business identifies, monitors, evaluates, and manages the firm's financial and non-financial risks in support of the firm's Risk Appetite Statement and the firm's strategic plan. Operating in a fast-paced and dynamic environment and utilizing best-in-class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and an unwavering commitment to excellence.

Overview

To ensure uncompromising accuracy and timeliness in the delivery of risk metrics, our platform is continuously growing and evolving. Market Risk Engineering combines the principles of Computer Science, Mathematics, and Finance to produce large-scale, computationally intensive calculations of the risk Goldman Sachs faces with each transaction we engage in. Market Risk Engineering has an opportunity for an Associate-level Software Engineer to work across a broad range of applications and an extremely diverse set of technologies to keep the suite operating at peak efficiency.

As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, cloud computing, HDFS, Spark, S3, ReactJS, and Sybase IQ, among many others). A glimpse of the interesting problems that we engineer solutions for includes acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and response user interfaces.

What We Look For

  • Senior Developer on large projects across a global team of developers and risk managers
  • Performance-tune applications to improve memory and CPU utilization
  • Perform statistical analyses to identify trends and exceptions related to Market Risk metrics
  • Build internal and external reporting for the output of risk metric calculations using data extraction tools, such as SQL, and data visualization tools, such as Tableau
  • Utilize web development technologies to facilitate application development for front-end UIs used for risk management actions
  • Develop software for calculations using databases like Snowflake, Sybase IQ, and distributed HDFS systems
  • Interact with business users to resolve issues with applications
  • Design and support batch processes using scheduling infrastructure for calculating and distributing data to other systems
  • Oversee junior technical team members in all aspects of the Software Development Life Cycle (SDLC), including design, code review, and production migrations

Skills And Experience

  • Bachelor's degree in Computer Science, Mathematics, Electrical Engineering, or a related technical discipline
  • 1-2 years' experience working on a risk technology team at another bank or financial institution; experience in market risk technology is a plus
  • Experience with one or more major relational/object databases
  • Experience in software development, including a clear understanding of data structures, algorithms, software design, and core programming concepts
  • Comfortable multi-tasking, managing multiple stakeholders, and working as part of a team
  • Comfortable working with multiple languages
  • Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant)
  • Experience working with process scheduling platforms like Apache Airflow
  • Should be ready to work with GS proprietary technologies like Slang/SECDB
  • An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles)
  • Knowledge and experience in distributed computing: parallel computation on a single machine (e.g., DASK) and distributed processing on public cloud
  • Knowledge of the SDLC and experience working through the entire life cycle of a project from start to end

About Goldman Sachs

At Goldman Sachs, we commit our people, capital, and ideas to help our clients, shareholders, and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities, and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings, and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers.

We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

© The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal opportunity employer and does not discriminate on the basis of race, color, religion, sex, national origin, age, veteran status, disability, or any other characteristic protected by applicable law.

Posted 1 week ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description

The Oracle Cloud Infrastructure (OCI) team offers a unique opportunity to design, build, and operate a comprehensive suite of large-scale, integrated cloud services within a broadly distributed, multi-tenant cloud environment. With a commitment to delivering exceptional cloud products, OCI empowers customers to tackle some of the world's most pressing challenges, providing tailored solutions that meet their evolving needs.

Are you passionate about designing and building large-scale distributed monitoring and analytics solutions for the cloud? Do you thrive in environments that combine the agility and innovation of a startup with the resources and stability of a Fortune 100 company? As a member of our fast-growing team, you'll enjoy a high degree of autonomy, diverse challenges, and unparalleled opportunities for growth. This role offers substantial upside potential, high visibility, and accelerated career advancement. Join our team of talented individuals and tackle complex problems in distributed systems, data processing, metrics collection, data analytics, network monitoring, and multi-tenant Infrastructure-as-a-Service (IaaS) at massive scale, driving innovation and excellence in the cloud.

We are seeking an experienced Principal Engineer to design and develop software, including automated test suites, for major components in our Network Monitoring & Analytics Stack. As a member of our team, you will have the opportunity to build large-scale distributed monitoring and analytics solutions for the cloud, working with a talented group of engineers to solve complex problems in distributed systems, data processing, and network monitoring. Do you thrive in a fast-paced environment, and want to be an integral part of a truly great team? Come join us!

Required Qualifications:

  • 9+ years of experience in software development
  • 3+ years of experience in developing large-scale distributed services/applications
  • Proficiency with Java/Python/C++/Go and object-oriented programming
  • Excellent knowledge of data structures and search/sort algorithms
  • Excellent organizational, verbal, and written communication skills
  • Bachelor's degree in Computer Science

Desired Qualifications:

  • Knowledge of cloud computing and networking technologies, including monitoring services
  • Network management technologies such as SNMP, gNMI, protobuf, YANG models, etc.
  • Networking technologies such as L2/L3, TCP/IP, sockets, BGP, OSPF, LLDP, ICMP, etc.
  • Experience developing service-oriented systems
  • Exposure to Kafka, Prometheus, Spark, Airflow, Flink, or other open-source distributed data streaming platforms and databases
  • Experience developing automated test suites
  • Experience with Jira, Confluence, and BitBucket
  • Knowledge of Scrum and Agile methodologies

Responsibilities

  • Design and develop software for major components in our Network Monitoring & Analytics Stack
  • Build complex distributed systems involving large amounts of data handling, including collecting metrics, building data pipelines, and analytics for real-time, online, and batch processing
  • Develop automated test suites to ensure high-quality solutions
  • Collaborate with cross-functional teams to deliver cloud services that meet customer needs
  • Participate in an agile environment, contributing to the development of innovative new systems to power business-critical applications

Qualifications

Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges.
We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 week ago

Apply

6.0 - 11.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

THE ROLE

Zeta Marketing Platform (ZMP) is a machine learning/AI-powered customer acquisition and CRM multi-tenant platform. The Senior Backend Developer will work on server-side APIs and services that enable a highly distributed event pipeline and a stack that handles tens of thousands of messages per second. As a Senior Software Engineer, you will provide technical leadership to the group responsible for architecting, developing, and owning the Zeta Marketing Platform. You will collaborate with engineers, product managers, and executives across the organization to develop a roadmap and subsequent projects to build the next-generation comprehensive, multichannel marketing solution that unifies and unlocks data across digital touch points, driving return on marketing investment.

As a Senior Software Developer, you will be:

  • Responsible for independently and cooperatively understanding business requirements, and designing and implementing core components for a real-world marketing automation platform
  • Designing and implementing application code to satisfy product requirements
  • Ensuring high product quality through rigorous code reviews and unit tests
  • Fixing bugs and implementing enhancements
  • Taking ownership of a significant product component in design and implementation

Your Impact:

We are looking for exceptional talent with superior academic credentials and a solid foundation in computer science and distributed systems design and development. The candidate will have at least 6 years of experience developing scalable, robust software platforms using Java/Ruby/Python or an equivalent language. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. Strong communication skills in a large, distributed development team environment are essential. Experience in the advertising attribution domain is a plus.

Requirements & Qualifications:

  • BS or MS in Computer Science or a related field
  • 8-12 years of working experience with J2EE technology, Python, or an equivalent OO paradigm
  • Strong knowledge and experience with Kafka, Elasticsearch, Airflow, NoSQL databases such as Aerospike, Thrift, CI, and AWS
  • Experience with SQL languages
  • Experience working with container-based solutions is a plus
  • Experience working in a fast-paced technology environment
  • Strong object-oriented programming and design skills
  • Excellent problem-solving, critical-thinking, and communication skills
  • Ability and desire to learn new skills and take on new tasks

BENEFITS & PERKS

  • Unlimited PTO
  • Excellent medical, dental, and vision coverage
  • Employee Equity and Stock Purchase Plan
  • Employee Discounts, Virtual Wellness Classes, and Pet Insurance
  • And more!

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

🚀 Now Hiring: Senior Data Engineers – Azure & DBT/Snowflake

📍 Location: Chennai (Hybrid)
🕒 Experience: 5+ Years
🔢 Openings: 4
📌 Employment Type: Full-time

About the Role

We're looking for an experienced Senior Data Engineer who's passionate about building modern data platforms and solving complex data challenges. You'll work across cloud technologies like Azure, Snowflake, DBT, and Databricks to architect, develop, and scale our data infrastructure. If you're excited about optimizing data pipelines, enabling better analytics, and shaping data architecture, this role is for you!

What You'll Do

✅ Design and build scalable data pipelines using Azure Data Factory, Databricks, and Snowflake
✅ Develop modular and reusable data models with DBT
✅ Transform and load data from diverse sources into cloud-based data lakes and warehouses
✅ Ensure data accuracy, security, and performance at every step
✅ Collaborate with data analysts, product teams, and business stakeholders
✅ Maintain clean code with unit tests and clear documentation
✅ Drive automation, CI/CD, and process improvement initiatives

What We're Looking For

🔹 Strong hands-on experience with Azure Data Services (Data Lake, ADF, Synapse, Databricks)
🔹 Proficient in DBT and Snowflake for modeling and transformation
🔹 Advanced SQL and Python skills
🔹 Understanding of data warehouse concepts (dimensional modeling, SCD, CDC)
🔹 Familiar with Airflow, Fivetran, Glue, or similar tools
🔹 Comfortable working in Agile environments and cross-functional teams
🔹 Great communication skills and a proactive mindset

#DataEngineer #Azure #Snowflake #DBT #Databricks #Hiring #TechJobs #DataJobs #RemoteJobs #SQL #Python

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world's most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble, and Metro.

We are seeking a talented Engineering Manager with MLOps expertise to lead a team of engineers in developing products that help Retailers transform their Retail Media business in a way that maximizes ad revenue and enables massive scale. As an Engineering Manager, you will play a pivotal role in designing and delivering high-quality software solutions. You will be responsible for leading a team, mentoring engineers, contributing to system architecture, and ensuring adherence to engineering best practices. Your technical expertise, leadership skills, and ability to drive results will be key to the success of our products.

What you will be doing

You will lead the charge in ensuring operational efficiency and delivering high-value solutions. You'll mentor and develop a high-performing team of Big Data and MLOps engineers, driving best practices in software development, data management, and model deployment. With a focus on robust technical design, you'll ensure solutions are secure, scalable, and efficient. Your role will involve hands-on development to tackle complex challenges, collaborating across teams to define requirements, and delivering innovative solutions. You'll keep stakeholders and senior management informed on progress, risks, and opportunities while staying ahead of advancements in AI/ML technologies and driving their application. With an agile mindset, you will overcome challenges and deliver impactful solutions that make a difference.

Technical Expertise

  • Proven experience in microservices architecture, with hands-on knowledge of Docker and Kubernetes for orchestration.
  • Proficiency in MLOps and machine learning workflows using tools like Spark.
  • Strong command of SQL and PySpark programming.
  • Expertise in Big Data solutions such as Spark and Hive, with advanced Spark optimization and tuning skills.
  • Hands-on experience with Big Data orchestrators like Airflow.
  • Proficiency in Python programming, particularly with frameworks like FastAPI or equivalent API development tools.
  • Experience in unit testing, code quality assurance, and the use of Git or other version control systems.

Cloud And Infrastructure

  • Practical knowledge of cloud-based data stores, such as Redshift and BigQuery (preferred).
  • Experience in cloud solution architecture, especially with GCP and Azure.
  • Familiarity with GitLab CI/CD pipelines is a bonus.

Monitoring And Scalability

  • Solid understanding of logging, monitoring, and alerting systems for production-level big data pipelines.
  • Prior experience with scalable architectures and distributed processing frameworks.

Soft Skills And Additional Plus Points

  • A collaborative approach to working within cross-functional teams.
  • Ability to troubleshoot complex systems and provide innovative solutions.
  • Familiarity with GitLab for CI/CD and infrastructure automation tools is an added advantage.

What You Can Expect From Us

We won't just meet your expectations. We'll defy them. So you'll enjoy the comprehensive rewards package you'd expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You'll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment, and learn. And we don't just talk about diversity and inclusion. We live it every day, with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, and dh Thrive as the living proof.

We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat, please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.

Our approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise it with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information, please see our Privacy Notice, which can be found (here).

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Apache Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.
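To make that concrete, here is a minimal sketch of the kind of DAG definition these roles revolve around. The DAG id, task names, and callables are illustrative placeholders, not taken from any specific job posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder extract step; a real pipeline would pull from an API or database.
    return [1, 2, 3]


def load():
    # Placeholder load step; a real pipeline would write to a warehouse.
    print("loading transformed records")


# A minimal daily pipeline: two tasks with an explicit dependency.
with DAG(
    dag_id="example_daily_pipeline",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older 2.x versions use schedule_interval
    catchup=False,  # don't backfill runs from before the deploy date
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # extract must finish before load starts
```

Most production pipelines are variations on this pattern: more tasks, real sources and sinks, and stricter retry and alerting settings.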

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for Airflow professionals in India varies by experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of Airflow, a typical career path may progress as follows:

  1. Junior Airflow Developer
  2. Airflow Developer
  3. Senior Airflow Developer
  4. Airflow Tech Lead

Related Skills

In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
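Several of these questions (XCom, task dependencies, custom operators) can be rehearsed with a small runnable sketch like the one below. It uses the standard Airflow 2.x API; the operator and DAG names are illustrative, not canonical interview answers:

```python
from datetime import datetime

from airflow import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.operators.python import PythonOperator


class MultiplyOperator(BaseOperator):
    """A custom operator (illustrative): pulls a value from XCom and scales it."""

    def __init__(self, source_task_id: str, factor: int = 2, **kwargs):
        super().__init__(**kwargs)
        self.source_task_id = source_task_id
        self.factor = factor

    def execute(self, context):
        # XCom pull: read the return value of an upstream task.
        value = context["ti"].xcom_pull(task_ids=self.source_task_id)
        result = value * self.factor
        self.log.info("scaled %s by %s -> %s", value, self.factor, result)
        return result  # the return value is automatically pushed to XCom


def produce_number():
    # A plain return value is pushed to XCom under the key "return_value".
    return 21


with DAG(
    dag_id="xcom_interview_demo",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule=None,  # no schedule; trigger manually for practice
    catchup=False,
) as dag:
    produce = PythonOperator(task_id="produce", python_callable=produce_number)
    consume = MultiplyOperator(task_id="consume", source_task_id="produce")

    produce >> consume  # dependency: produce runs before consume
```

Being able to point at where the XCom push and pull happen in code like this tends to land better in interviews than reciting definitions.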

Closing Remark

As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated on the latest developments in Airflow, and demonstrate your problem-solving abilities to stand out in a competitive job market. Good luck!
