
5801 Airflow Jobs - Page 15

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

2.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are seeking Senior and Principal Data Architects to join our team. In this role, you will combine hands-on contribution, customer engagement, and technical team management. As a Data Architect, your responsibilities will include designing, architecting, deploying, and maintaining solutions on the MS Azure platform using various Cloud & Big Data technologies. You will manage the full life-cycle of Data Lake / Big Data solutions, from requirement gathering and analysis to platform selection, architecture design, and deployment. It will be your responsibility to implement scalable solutions on the Cloud and collaborate with a team of business domain experts, data scientists, and application developers to develop Big Data solutions. Moreover, you will be expected to explore and learn new technologies for creative problem solving and mentor a team of Data Engineers.

The ideal candidate should possess strong hands-on experience in implementing Data Lakes with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview. Additionally, experience with big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc., is required. Proficiency in programming and debugging in Python and Scala/Java is essential, with experience in building REST services considered beneficial. Candidates should also have experience in supporting BI and Data Science teams in consuming data in a secure and governed manner, along with a good understanding of CI/CD with Git and Jenkins / Azure DevOps. Experience in setting up cloud-computing infrastructure solutions, hands-on exposure to NoSQL databases, and data modelling in Hive are all highly valued. Applicants should have a minimum of 9 years of technical experience, with at least 5 years on MS Azure and 2 years on Hadoop (CDH/HDP).

Posted 6 days ago

Apply

0.0 - 6.0 years

15 - 18 Lacs

Indore, Madhya Pradesh

On-site

Location: Indore | Experience: 6+ Years | Work Type: Hybrid | Notice Period: 0-30 days

We are hiring for a Digital Transformation Consulting firm that specializes in the advisory and implementation of AI, Automation, and Analytics strategies for healthcare providers. The company is headquartered in NJ, USA, and its India office is in Indore, MP.

Job Description: We are seeking a highly skilled Tech Lead with expertise in database management, data warehousing, and ETL pipelines to drive the data initiatives in the company. The ideal candidate will lead a team of developers, architects, and data engineers to design, develop, and optimize data solutions. This role requires hands-on experience in database technologies, data modeling, ETL processes, and cloud-based data platforms.

Key Responsibilities: Lead the design, development, and maintenance of scalable database, data warehouse, and ETL solutions. Define best practices for data architecture, modeling, and governance. Oversee data integration, transformation, and migration strategies. Ensure high availability, performance tuning, and optimization of databases and ETL pipelines. Implement data security, compliance, and backup strategies.

Required Skills & Qualifications: 6+ years of experience in database and data engineering roles. Strong expertise in SQL, NoSQL, and relational database management systems (RDBMS). Hands-on experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery). Deep understanding of ETL tools and frameworks (e.g., Apache Airflow, Talend, Informatica). Experience with cloud data platforms (AWS, Azure, GCP). Proficiency in programming/scripting languages (Python, SQL, shell scripting). Strong problem-solving, leadership, and communication skills.

Preferred Skills (Good to Have): Experience with big data technologies (Hadoop, Spark, Kafka). Knowledge of real-time data processing. Exposure to AI/ML technologies and working with ML algorithms.

Job Types: Full-time, Permanent. Pay: ₹1,500,000.00 - ₹1,800,000.00 per year. Schedule: Day shift.

Application Question(s): We must fill this position urgently. Can you start immediately? Have you held a lead role in the past?

Experience: Extract, Transform, Load (ETL): 6 years (Required); Python: 5 years (Required); big data technologies (Hadoop, Spark, Kafka): 6 years (Required); Snowflake: 6 years (Required); Data warehouse: 6 years (Required).

Location: Indore, Madhya Pradesh (Required). Work Location: In person

Posted 6 days ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Iris's Fortune 100 direct client is looking for a Senior AWS Data Engineer for the Pune / Noida / Gurgaon location. Position: Senior AWS Data Engineer. Location: Pune / Noida / Gurgaon. Hybrid: 3 days in office, 2 days work from home.

Job Description: 6 to 10 years of overall experience. Good experience in data engineering is required. Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, and Redshift. Good communication skills are required.

About Iris Software Inc. With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.

Posted 6 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Cloud Architect with DevOps. Location: Noida (Hybrid). Job Type: Full-Time | Permanent. Experience: 7+ years.

Job Summary: We are seeking an experienced Cloud Architect with strong DevOps expertise to lead and support our cloud transformation journey. The ideal candidate will be responsible for designing scalable and secure cloud architectures, driving cloud migration from traditional ETL tools (e.g., IBM DataStage) to modern cloud-native solutions, and enabling DevOps automation and best practices. The candidate must also have strong hands-on experience with Spark and Snowflake, along with a strong background in optimizing cloud performance and addressing cloud security vulnerabilities.

Key Responsibilities: Design and implement scalable, secure, and high-performance cloud architectures in AWS, Azure, or GCP. Lead the cloud migration of ETL workloads from IBM DataStage to cloud-native or Spark-based pipelines. Architect and maintain Snowflake data warehouse solutions, ensuring high performance and cost optimization. Implement DevOps best practices, including CI/CD pipelines, infrastructure as code (IaC), monitoring, and logging. Drive automation and operational efficiency across build, deployment, and environment provisioning processes. Proactively identify and remediate cloud security vulnerabilities, ensuring compliance with industry best practices. Collaborate with cross-functional teams including data engineers, application developers, and cybersecurity teams. Provide architectural guidance on Spark-based big data processing pipelines in cloud environments. Support troubleshooting, performance tuning, and optimization across platforms and tools.

Required Qualifications: 7+ years of experience in cloud architecture, DevOps engineering, and data platform modernization. Strong expertise in AWS, Azure, or GCP cloud platforms. Proficient in Apache Spark for large-scale data processing. Hands-on experience with Snowflake architecture, performance tuning, and data governance. Deep knowledge of DevOps tools: Terraform, Jenkins, Git, Docker, Kubernetes, Ansible, etc. Experience with cloud migration, especially from legacy ETL tools like IBM DataStage. Strong scripting and automation skills in Python, Bash, or PowerShell. Good understanding of networking, cloud security, IAM, VPCs, and compliance standards. Experience implementing CI/CD pipelines, observability, and incident response in cloud environments.

Preferred Qualifications: Certification in one or more cloud platforms (e.g., AWS Solutions Architect, Azure Architect). Experience with data lake and lakehouse architectures. Familiarity with modern data orchestration tools like Airflow, DBT, or Glue. Working knowledge of Agile methodologies and DevOps culture. Familiarity with cost management and optimization in cloud deployments.

Posted 6 days ago

Apply

3.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Transforming the Future of Enterprise Planning At o9, our mission is to be the Most Value-Creating Platform for enterprises by transforming decision-making through our AI-first approach. By integrating siloed planning capabilities and capturing millions—even billions—in value leakage, we help businesses plan smarter and faster. This not only enhances operational efficiency but also reduces waste, leading to better outcomes for both businesses and the planet. Global leaders like Google, PepsiCo, Walmart, T-Mobile, AB InBev, and Starbucks trust o9 to optimize their supply chains. Job Title: DevOps Engineer - AI, R&D Location: Bengaluru, Karnataka, India (hybrid) About o9 Solutions: o9 Solutions is a leading enterprise AI software platform provider for transforming planning and decision-making capabilities. We are building the next generation of AI-powered solutions to help businesses optimize their operations and drive innovation. Our Next-Gen AI R&D team is at the forefront of this mission, pushing the boundaries of what's possible with artificial intelligence. About the Role... We are looking for a highly skilled and motivated DevOps Engineer to join our Next-Gen AI R&D team. In this role, you will be instrumental in developing and implementing MLOps strategies for Generative AI models, designing and managing CI/CD pipelines for ML workflows, and ensuring the robustness, scalability, and reliability of our AI solutions in production environments. What you will do in this role: Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability. Design and manage CI/CD pipelines specialized for ML workflows, including the deployment of generative models such as GANs, VAEs, and Transformers. Monitor and optimize the performance of AI models in production, employing tools and techniques for continuous validation, retraining, and A/B testing. Collaborate with data scientists and ML researchers to understand model requirements and translate them into scalable operational frameworks. Implement best practices for version control, containerization, infrastructure automation, and orchestration using industry-standard tools (e.g., Docker, Kubernetes). Ensure compliance with data privacy regulations and company policies during model deployment and operation. Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance. Stay up-to-date with the latest developments in MLOps and Generative AI, bringing innovative solutions to enhance our AI capabilities. What you'll have.. Must Have: Minimum 3 years of hands-on experience developing and deploying AI models in production environments with 1 year of experience in developing proofs of concept and prototypes. Strong background in software development, with experience in building and maintaining scalable, distributed systems. Strong programming skills in languages like Python and familiarity in ML frameworks and libraries (e.g., TensorFlow, PyTorch). Knowledge of containerization and orchestration tools like Docker and Kubernetes. Proficiency with MLOps tools such as MLflow, Kubeflow, Airflow, or similar for managing machine learning workflows and lifecycle. Practical understanding of generative AI frameworks (e.g., HuggingFace Transformers, OpenAI GPT, DALL-E). Expertise in containerization technologies like Docker and orchestration tools such as Kubernetes for scalable model deployment. 
Expertise in MLOps and LLMOps practices, including CI/CD for ML models. Nice to Have: Familiarity with cloud platforms (AWS, GCP, Azure) and their ML/AI service offerings. Experience with continuous integration and delivery tools such as Jenkins, GitLab CI/CD, or CircleCI. Experience with infrastructure as code tools like Terraform or CloudFormation. Experience with advanced GenAI applications such as natural language generation, image synthesis, and creative AI. Familiarity with experiment tracking and model registry tools. Knowledge of high-performance computing and parallel processing techniques. Contributions to open-source MLOps or GenAI projects. What we’ll do for you: Flat organization: With a very strong entrepreneurial culture (and no corporate politics). Great people and unlimited fun at work. Possibility to really make a difference in a scale-up environment. Support network: Work with a team you can learn from every day. Diversity: We pride ourselves on our international working environment. AI is firmly on every CEO's agenda, o9 @ Davos & Reflections: https://o9solutions.com/articles/why-ai-is-topping-the-ceo-agenda/ Work-Life Balance: https://youtu.be/IHSZeUPATBA?feature=shared Feel part of A team: https://youtu.be/QbjtgaCyhes?feature=shared How the process works... Respond with your interest to us. We’ll contact you either via video call or phone call - whatever you prefer, with the further schedule status. During the interview phase, you will meet with the technical panel for 60 minutes. We will contact you after the interview to let you know if we’d like to progress your application. There will be 2-3 rounds of technical discussion followed by a Managerial round. We will let you know if you’re the successful candidate. Good luck! Why Join o9 Solutions? At o9, you'll be at the forefront of AI innovation, working with a dynamic team that's shaping the future of enterprise solutions. We offer a stimulating and rewarding environment where your contributions directly impact cutting-edge projects. You'll gain invaluable experience with the latest AI technologies and significantly grow your skills. Join us and be a key player in building the next generation of intelligent solutions that truly transform businesses! More about us… At o9, transparency and open communication are at the core of our culture. Collaboration thrives across all levels—hierarchy, distance, or function never limit innovation or teamwork. Beyond work, we encourage volunteering opportunities, social impact initiatives, and diverse cultural celebrations. With a $3.7 billion valuation and a global presence across Dallas, Amsterdam, Barcelona, Madrid, London, Paris, Tokyo, Seoul, and Munich, o9 is among the fastest-growing technology companies in the world. Through our aim10x vision, we are committed to AI-powered management, driving 10x improvements in enterprise decision-making. Our Enterprise Knowledge Graph enables businesses to anticipate risks, adapt to market shifts, and gain real-time visibility. By automating millions of decisions and reducing manual interventions by up to 90%, we empower enterprises to drive profitable growth, reduce inefficiencies, and create lasting value. o9 is an equal-opportunity employer that values diversity and inclusion. We welcome applicants from all backgrounds, ensuring a fair and unbiased hiring process. Join us as we continue our growth journey!

Posted 6 days ago

Apply

3.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

About The Company Veersa is a healthtech company that leverages emerging technology and data science to solve business problems in the US healthcare industry. Veersa has established a niche in serving small and medium entities in the US healthcare space through its tech frameworks, platforms, and tech accelerators. Veersa is known for providing innovative solutions using technology and data science to its client base and is the preferred innovation partner to its clients. Veersa's rich technology expertise manifests in the various tech accelerators and frameworks developed in-house to assist in rapid solutions delivery and implementations. Its end-to-end data ingestion, curation, transformation, and augmentation framework has helped several clients quickly derive business insights and monetize data assets. Veersa teams work across all emerging technology areas such as AI/ML, IoT, and Blockchain and using tech stacks as MEAN, MERN, PYTHON, GoLang, ROR, and backend such as Java Springboot, NodeJs, and using databases as PostgreSQL, MS SQL, MySQL, Oracle on AWS and Azure cloud using serverless architecture. Veersa has two major business lines - Veersalabs: an In-house R&D and product development platform and Veersa tech consulting: Technical solutions delivery for clients. Veersa's customer base includes large US Healthcare software vendors, Pharmacy chains, Payers, providers, and Hospital chains. Though Veersa's focus geography is North America, Veersa also provides product engineering expertise to a few clients in Australia and Singapore. About The Role We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our growing Data Engineering Team. In this critical role, you will design, architect, and develop cutting-edge multi-tenant SaaS data solutions hosted on Azure Cloud. Your work will focus on delivering robust, scalable, and high-performance data pipelines and integrations that support our enterprise provider and payer data ecosystem. This role is ideal for someone with deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will also be expected to mentor and guide junior engineers, help define best practices, and contribute to the overall data strategy. We are specifically looking for someone with strong hands-on experience in SQL, Python, and ideally Airflow and Bash scripting. Key Responsibilities Architect and implement scalable data integration and data pipeline solutions using Azure cloud services. Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks using tools like SQL, Python, and Airflow. Build and automate data workflows and orchestration pipelines; knowledge of Airflow or equivalent tools is a plus. Write and maintain Bash scripts for automating system tasks and managing data jobs. Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions. Develop and manage data flows, data mappings, and data quality & validation rules across multiple tenants and systems. Implement best practices for data modeling, metadata management, and data governance. Configure, maintain, and monitor integration jobs to ensure high availability and performance. Lead code reviews, mentor data engineers, and help shape engineering culture and standards. 
Stay current with emerging technologies and recommend tools or processes to improve the team's effectiveness. Required Qualifications Bachelor's or Master's degree in Computer Science, Information Systems, or related field. 3+ years of experience in data engineering, with a strong focus on Azure-based solutions. Proficiency in SQL and Python for data processing and pipeline development. Experience in developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash. Proven experience in designing and implementing real-time and batch data integrations. Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies. Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture. Familiarity with data quality, metadata management, and data validation frameworks. Strong problem-solving skills and the ability to communicate complex technical concepts clearly. Preferred Qualifications Experience with multi-tenant SaaS data solutions. Background in healthcare data, especially provider and payer ecosystems. Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git). Experience mentoring and coaching other engineers in technical and architectural decision-making. (ref:hirist.tech)

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Data Engineer - GenAI Platform & Data Engineering About The Role We are seeking an experienced Data Engineer with deep hands-on expertise in AWS, Azure Databricks, Snowflake, and modern data engineering practices to join our growing Data & AI Engineering team. The ideal candidate is a strategic thinker who can design scalable platforms, drive robust data solutions, and support high-impact AI/GenAI projects from the ground up. Key Responsibilities Working experience of 3 years in Data engineering Design, build, and optimize scalable data pipelines using modern frameworks and orchestration tools. Develop and maintain ETL/ELT workflows using AWS, Azure Databricks, Airflow, and Azure Data Factory. Manage and model data in Snowflake to support advanced analytics and machine learning use cases. Collaborate with analytics, product, and engineering teams to align data solutions with business goals. Ensure high standards for data quality, governance, and pipeline performance. Mentor junior engineers and help lead a high-performing data and platform engineering team. Lead and support GenAI platform initiatives, including building reusable libraries, integrating vector databases, and developing LLM-based pipelines. Build components of agentic frameworks using Python, Spring AI, and deploy them on AWS EKS. Establish and manage CI/CD pipelines using Jenkins. Drive ML Ops and model deployment workflows to ensure reliable and scalable AI solution delivery. Required Qualifications Proven hands-on experience with Azure Databricks, Snowflake, Airflow, and Python. Strong proficiency in SQL, Spark, Spark Streaming, and modern data orchestration frameworks. Solid understanding of data modeling, ETL best practices, and performance optimization. Experience in cloud-native environments (AWS and/or Azure). Strong hands-on expertise in AWS EKS, CI/CD (Jenkins), and ML Ops/model deployment workflows. Ability to lead, mentor, and collaborate effectively across cross-functional teams. Preferred Qualifications Experience with Search Platforms such as Elasticsearch, SOLR, OpenSearch, or Vespa. Familiarity with Spring Boot microservices and EKS-based deployments. Background in Recommender Systems, with leadership roles in AI/ML projects. Expertise in GenAI platform engineering, including LLMs, RAG architecture, Vector Databases, and agentic design. Proficiency in Python, Java, Spring AI, and enterprise-grade software development. Ability to build platform-level solutions with a focus on reusability, runtime libraries, and scalability. What We Offer A unique opportunity to build and scale cutting-edge AI and data platforms that drive meaningful business outcomes. A collaborative, growth-oriented work culture with room for ownership and innovation. Competitive compensation and a comprehensive benefits package. Flexible hybrid/remote work model to support work-life balance. (ref:hirist.tech)

Posted 6 days ago

Apply

1.0 - 31.0 years

2 - 2 Lacs

Bengaluru/Bangalore

On-site

Job Description: Installation of HVAC systems and components according to blueprints and specifications. Connecting electrical wiring and components. Ensuring proper system sizing and airflow. Performing routine maintenance on HVAC systems, including cleaning, lubricating, and replacing parts. Checking and adjusting system controls for optimal performance. Inspecting systems for potential issues and recommending repairs. Diagnosing and troubleshooting problems with HVAC systems. Repairing or replacing faulty components, such as compressors, motors, and control boards. Communicating with customers about system issues and recommended solutions. Following safety procedures and guidelines & records maintenance. Staying up-to-date on new technologies and regulations. Technical knowledge of HVAC systems and components. Troubleshooting and problem-solving skills. Physical ability to perform tasks such as lifting, bending, and climbing. Good communication and customer service skills.

Posted 6 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Role We are seeking a highly experienced and strategic AI/ML Architect to lead the design, development, and deployment of scalable artificial intelligence and machine learning solutions. As a core member of our technical leadership team, you will play a pivotal role in building intelligent systems that drive innovation and transform digital healthcare delivery across our AI Driven Telemedicine platform. Key Responsibilities Architect AI/ML systems that support key business goals, from real-time diagnosis and predictive analytics to natural language conversations and recommendation engines. Design and oversee machine learning pipelines, model training, validation, deployment, and performance monitoring. Guide selection of ML frameworks (e.g., TensorFlow, PyTorch) and ensure proper MLOps practices (CI/CD, model versioning, reproducibility, drift detection). Collaborate with cross-functional teams (data engineers, product managers, UI/UX, backend developers) to integrate AI into real-time applications and APIs. Build and maintain scalable AI infrastructure, including data ingestion, storage, and processing layers in cloud environments (AWS, GCP, or Azure). Lead research and experimentation on generative AI, NLP, computer vision, and deep learning techniques relevant to healthcare use cases. Define data strategies, governance, and model explainability/ethics frameworks to ensure compliance with regulatory standards like HIPAA. Mentor and lead a growing team of ML engineers and data scientists. Qualifications Must-Have: Bachelor's or Master’s degree in Computer Science, AI, Data Science, or related field (PhD preferred). 7+ years of experience in AI/ML development, with at least 2 years in a lead or architect role. Proven experience designing and deploying production-grade ML systems at scale. Strong grasp of ML algorithms, deep learning, NLP, computer vision, and generative AI. Expertise in Python, ML libraries (TensorFlow, PyTorch, Scikit-learn), and ML Ops tools (MLflow, Kubeflow, SageMaker, etc.). Familiarity with data engineering pipelines (Airflow, Spark, Kafka) and cloud platforms. Strong communication and collaboration skills. Preferred: Experience with healthcare data standards (FHIR, HL7) and medical ontologies (SNOMED CT, ICD). Familiarity with AI ethics, fairness, and interpretability frameworks. Startup or early-stage product development experience.

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Databricks Engineer specializing in the Azure Data Platform, you will be responsible for designing, developing, and optimizing scalable data pipelines within the Azure ecosystem. You should have hands-on experience with Python-based ETL development, Lakehouse architecture, and building Databricks workflows utilizing the bronze-silver-gold data modeling approach (a brief illustrative sketch of this layering follows this listing). Your key responsibilities will include developing and maintaining ETL pipelines using Python and Apache Spark in Azure Databricks, implementing and managing bronze-silver-gold data lake layers using Delta Lake, and working with various Azure services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for end-to-end pipeline orchestration. It will be crucial to ensure data quality, integrity, and lineage across all layers of the data pipeline, optimize Spark performance, manage cluster configurations, and schedule jobs effectively in Databricks. Collaboration with data analysts, architects, and business stakeholders to deliver data-driven solutions will also be part of your role.

To be successful in this role, you should have at least 3 years of experience with Python in a data engineering environment, 2+ years of hands-on experience with Azure Databricks and Apache Spark, and a strong background in building scalable data lake pipelines following the bronze-silver-gold architecture. Additionally, in-depth knowledge of Delta Lake, Parquet, and data versioning, along with familiarity with Azure Data Factory, ADLS Gen2, and SQL, is required. Experience with CI/CD pipelines and job orchestration tools such as Azure DevOps or Airflow would be advantageous. Excellent communication skills, both verbal and written, are essential. Nice-to-have qualifications include experience with data governance, security, and monitoring in Azure, exposure to real-time streaming or event-driven pipelines (Kafka, Event Hub), and an understanding of MLflow, Unity Catalog, or other data cataloging tools.

By joining our team, you will have the opportunity to be part of high-impact, cloud-native data initiatives, work in a collaborative and growth-oriented team focused on innovation, and contribute to modern data architecture standards using the latest Azure technologies. If you are ready to advance your career as a Databricks Engineer in the Azure Data Platform, please send your updated resume to hr@vidhema.com. We look forward to hearing from you and potentially welcoming you to our team.
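For context on the bronze-silver-gold (medallion) layering referenced in the listing above, the following is a minimal, illustrative PySpark sketch and not part of the posting. It assumes a Databricks (or otherwise Delta-enabled Spark) environment where a `spark` session is already provided; the storage path, table names, and columns (`order_id`, `amount`, `customer_id`) are hypothetical.

```python
from pyspark.sql import functions as F

raw_path = "abfss://landing@examplestore.dfs.core.windows.net/orders/"  # hypothetical ADLS Gen2 path

# Bronze: land raw files as-is, tagging each row with ingestion metadata.
bronze = (spark.read.format("json").load(raw_path)
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("lakehouse.bronze_orders")

# Silver: de-duplicate, drop bad records, and enforce types.
silver = (spark.table("lakehouse.bronze_orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("amount").isNotNull())
          .withColumn("amount", F.col("amount").cast("decimal(18,2)")))
silver.write.format("delta").mode("overwrite").saveAsTable("lakehouse.silver_orders")

# Gold: business-level aggregate that BI tools and analysts consume directly.
gold = (spark.table("lakehouse.silver_orders")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("lifetime_value")))
gold.write.format("delta").mode("overwrite").saveAsTable("lakehouse.gold_customer_value")
```

Each layer is written as a Delta table so that quality checks, lineage, and scheduled Databricks jobs can be attached per layer, which is what managing bronze-silver-gold layers with Delta Lake typically amounts to in practice.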

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

India

Remote

Hi, we’re TechnologyAdvice. At TechnologyAdvice, we pride ourselves on helping B2B tech buyers manage the complexity and risk of the buying process. We are a trusted source of information for tech buyers, delivering advice and facilitating connections between our buyers and the world’s leading sellers of business technology. Headquartered in Nashville, Tennessee, we are a remote-first company with more than 20 digital publications and over 500 global team members in the US, UK, Singapore, Australia, and the Philippines. We’re proud to have been repeatedly recognized as one of America’s fastest growing private companies by Inc., as well as a Tennessee top workplace. We work hard each day and have fun, too, with monthly virtual events, recreational slack channels, and the occasional costumed dance from our CEO. All positions are open to remote work unless otherwise specified in the requirements below. The opportunity As an Analytics Engineer and data modeler within the Business Intelligence team at TechnologyAdvice, you will transform source data into standardized reporting assets to improve business performance and help connect technology buyers and sellers. You will architect source-of-truth data schemas to support business intelligence and enable data-led opportunities. You will create and maintain semantic layers within reporting workflows, driving accuracy and consistency in how business logic is applied. You will work with business intelligence and data science to ensure adoption of standardized reporting tables. You will build production data products that serve as building blocks for predictive models and customer-facing experiences. You will address data quality issues to improve accuracy and increase transparency around upstream failures. You will develop governed production workflows to ensure stability and oversight in reporting processes. You will engineer logical, usable data models to support reporting self-service and adapt to continuously evolving data sources. Success in this role requires the ability to partner effectively with internal stakeholders and develop a deep understanding of the data used to measure and optimize business performance. A positive attitude, attention to detail, and the ability to adapt to changing priorities are essential. If you’re looking for a role where your contributions make a difference and your ideas are welcomed, we want to hear from you. Location: India What You'll Do Own the full lifecycle of data model development, including ideation, prototyping, implementation, refactoring, and deprecation of outdated assets. Develop and maintain semantic data models that serve as the source-of-truth for data customers across the organization. Build common dimension tables to support enterprise reporting use cases and improve data model consistency and maintainability. Document and translate business requirements into data complex models that cover enterprise reporting needs, including marketing attribution and revenue recognition. Standardize data nomenclature and data type conventions and transform legacy data objects to standardized models. Partner with engineering, business intelligence, data science, and other teams to ensure alignment on development priorities and data solutions. Build workflows that maximize the efficiency of data processes while maintaining high standards of data quality, data usability, and performance. Adhere to best practices related to metadata management and metadata reporting. 
Develop subject matter expertise in specific business areas and data domains, and help educate customers regarding the correct utilization of data objects. Build and maintain production data products that serve as building blocks for business intelligence reporting, predictive data models, and product-led development initiatives. Create and maintain data lineage documentation to improve transparency and auditability of data transformations and dependencies. Implement automated data validation and testing frameworks to ensure data model integrity and trustworthiness. Manage quality assurance workstreams and drive adoption of appropriate incident management frameworks for enterprise reporting. Partner with data engineering to optimize data transformations and scheduled procedures for cost, performance, and reporting schedules. Work directly with business intelligence analysts to enforce the adoption of relevant data models and capture reporting requirements for data model development. Partner with upstream data owners to identify opportunities to improve downstream reporting capabilities, reduce model complexity, and increase data coverage. Participate in agile development processes, including sprint planning, retrospectives, and iterative delivery of data products. Understand stakeholder business objectives and how data and analytics solutions can help internal customers meet their goals. Identify opportunities for data acquisition or data integration projects to improve the value of enterprise data assets. Who You Are Bachelor's or Master's degree in a relevant field such as Computer Science, Information Systems, Data Science or a related discipline. 4-6 years of experience in data engineering, analytics engineering, data modeling, data architecture or data science, preferably in a digital business. Understanding of best practices for designing modular and reusable data structures (e.g. star and snowflake schemas) and implementing conceptual and logical data models Advanced SQL techniques for data transformation, querying, and optimization. Experience working within cloud-based data environments such as Snowflake, Redshift, or BigQuery and managing database procedures and functions. Knowledge of data transformation frameworks and data lineage best practices. Experience building, maintaining, and optimizing ETL/ELT pipelines, using modern tools like dbt, Dagster, Airflow, or similar. Familiarity with version control, CI/CD, and modern development workflows. Experience applying AI to improve work quality and the efficiency of the data model development process. Ability to collaborate cross-functionally with data analysts, engineers, and business stakeholders to understand data needs and translate them into scalable models Knowledge of data governance principles, data quality standards, and regulatory compliance (e.g., GDPR, CCPA) is a plus. Expertise in scripting and automation with experience in object-oriented programming and building scalable frameworks is a plus. Experience building production dashboards using tools such as Tableau, Power BI, or Looker is a plus. Strong attention to detail and a passion for staying updated with industry trends and emerging data management and data transformation technologies. Agile professional who excels in a fast-paced environment and thrives on continuously pivoting strategies to drive business needs forward Please note that, as this is a contract position, no perks or benefits are included with this role. 
Work authorization Employer work visa sponsorship and support are not provided for this role. Applicants must be currently authorized to work in India at hire and must maintain authorization to work in India throughout their employment with our company. Salary Range We seek to hire top-tier individuals and intend for our compensation to be at a rate that allows us to recruit and retain individuals who align with our core values, purpose, mission, and vision. Final total compensation is based on a multitude of factors including, but not limited to, skill level, relevant experience to the position, and cost of labor. Hourly pay range ₹1,600—₹2,500 INR EOE statement We believe that our differences make us stronger, and thus foster a diverse and inclusive culture where people feel safe being themselves. TechnologyAdvice is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected under federal, state or local law. Pre-employment screening required. TechnologyAdvice does not engage with external staffing agencies. Any candidates introduced by such firms will not be eligible for compensation. Any AI-generated or incomplete application answers will be auto-rejected.

Posted 6 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hi All, we are hiring Data Engineers; please refer to the skill sets below.

Mandatory Skills: GCP Cloud (especially BigQuery and Dataproc), Big Data technologies, Hadoop, Hive, Python / PySpark, Airflow and DAG orchestration (see the illustrative sketch after this listing).

Preferred Skills: Experience with visualization tools such as Tableau or Power BI. Familiarity with Jethro is a plus.
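As illustrative context for the "Airflow and DAG orchestration" requirement above (not part of the posting), here is a minimal Airflow 2.x DAG sketch; the DAG name, task names, and callables are hypothetical placeholders rather than a real pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_gcs():
    # Placeholder: pull source data and land it in a GCS bucket.
    print("extracting source data to GCS")


def load_to_bigquery():
    # Placeholder: load the landed files into a BigQuery table.
    print("loading landed files into BigQuery")


with DAG(
    dag_id="daily_sales_pipeline",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_gcs", python_callable=extract_to_gcs)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)

    extract >> load  # run the load only after extraction succeeds
```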

Posted 6 days ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology. Job Family Group: IT&S Group.

Job Description:

You will work with: You will be part of a collaborative team of engineers and product managers, working alongside technology and business partners to support data initiatives that contribute to bp's digital transformation and platform capabilities.

Let me tell you about the role: As a Data Visualization Platform Engineer, you will support the development, integration, and security of data platforms that power enterprise applications. You will work closely with engineers and architects to help maintain performance, resilience, and compliance across bp's cloud and data ecosystems. This role is a great opportunity to grow your platform engineering skills while contributing to real-world solutions.

What you will deliver: Assist in platform engineering activities including configuration, integration, and maintenance of enterprise data systems. Support CI/CD implementation and Infrastructure-as-Code adoption to improve consistency and efficiency. Help monitor and improve platform performance, availability, and reliability. Collaborate on basic security operations, including monitoring, identity access controls, and remediation activities. Participate in the delivery of data pipelines and platform features across cloud environments. Contribute to documentation, testing, and process improvements across platform workflows. Work with teammates to ensure data systems meet compliance, governance, and security expectations!

What you will need to be successful (experience and qualifications) - Technical Skills We Need From You: Bachelor's degree in technology, engineering, or a related field, or equivalent hands-on experience. 2-4 years of experience in IT or platform/data engineering roles. Familiarity with CI/CD tools and Infrastructure-as-Code (e.g., Terraform, Azure Bicep, or AWS CDK). Basic experience with Python, Java, or Scala for scripting or automation. Exposure to data pipeline frameworks (e.g., Airflow, Spark, Kafka) and cloud platforms (AWS, Azure, or GCP). Understanding of data modeling, data lakes, SQL/NoSQL databases, and cloud-native tools. Ability to work collaboratively with cross-functional teams and follow structured engineering practices.

Essential Skills: Proven technical expertise in Microsoft Azure, AWS, Databricks, and Palantir. Understanding of data pipelines, ingestion, and transformation workflows. Awareness of platform security fundamentals and data governance principles. Familiarity with data visualization concepts and tools (e.g., Power BI, Tableau, or similar). Exposure to distributed systems and working with real-time or batch data processing frameworks. Willingness to learn and adapt to evolving technologies in data engineering and platform operations.

Skills That Set You Apart: Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management. AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp: Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter, as flexible working arrangements may be considered.

Travel Requirement: Up to 10% travel should be expected with this role. Relocation Assistance: This role is eligible for relocation within country. Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSql data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management + 4 more.

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 6 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

AI Architecture Lead - Workflow Enablement, Modular Tools and Infrastructure. Location: Remote (Southeast Asia preferred). Reports to: Director of Development, Singapore. Language: English (professional fluency required). Note: We are not able to offer visa sponsorship or relocation.

Job Description - Help Shape an Adaptive System of AI Tools That Supports Real Work: We're looking for someone who sees beyond individual tools; someone who can help shape how a growing collection of focused, AI-enabled utilities starts to work together across our global teams. This is about enabling practical solutions to real challenges, each small, valuable, and human-centered. Over time, your work will guide how these pieces start to connect, align, and support one another in more powerful ways. You'll be joining a multidisciplinary environment where workplace strategy, design, engineering, and delivery teams are already experimenting with generative AI. Your role is to bring structure and foresight, helping ensure these experiments evolve into a flexible, future-ready system.

What You'll Work On: Designing modular back-end logic that supports prompt-to-geometry, prompt-to-calculation, or prompt-to-output workflows (a minimal illustrative sketch follows this listing). Connecting GPT-style inputs with structured data, layout tools, and logic components in ways that scale. Working across design, engineering, and project teams to ensure tools serve real-world delivery. Identifying patterns, promoting reusability, and shaping shared foundations for internal AI tools. Supporting integration of new tools without creating complexity or duplication.

What You Bring: A platform mindset, grounded in reality: not everything needs to be built at once. Experience designing systems that support modular logic, prompt interaction, and cross-functional use. Comfort working with APIs, structured data formats, and lightweight service layers. A strong sense of how generative tools are changing the way software, design, and engineering interact. A clear communication style and collaborative working approach.

Technologies You May Use: Python, TypeScript, or similar environments; OpenAI API, LangChain, vector databases, RAG pipelines; JSON, YAML, REST APIs; Airflow, GitHub, cloud services (GCP, Azure, etc.); tools that help define workflows, structure logic, and promote reusability.

Why This Role Matters: Our teams are already building AI tools that serve specific needs. What's missing is the connective layer and the thinking that allows these tools to speak the same language, use shared logic, and scale responsibly. You'll help guide that evolution; not by dictating a fixed system, but by enabling a distributed one to take shape with enough structure to be coherent, and enough flexibility to adapt as we grow.
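To make the "prompt-to-output" idea in the listing above concrete, here is a minimal, hedged sketch (not part of the posting) of turning a free-text request into structured JSON that downstream layout or calculation tools could consume. It assumes the `openai` Python package (v1+) with an `OPENAI_API_KEY` in the environment; the model name and JSON fields are assumptions for illustration only.

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_to_spec(user_request: str) -> dict:
    """Ask the model for a machine-readable spec a hypothetical layout tool could consume."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for whatever the team standardizes on
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": "Return a JSON object with keys: rooms (int), area_sqm (number), notes (string).",
            },
            {"role": "user", "content": user_request},
        ],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    print(prompt_to_spec("A two-room office of roughly 80 square metres with a quiet corner."))
```

Keeping the structured-output contract (the JSON keys) in one place is what lets many small utilities share the same logic layer rather than each inventing its own prompt format.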

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Python Developer - AI/ML Document Automation. Location: Hyderabad. Work Mode: Hybrid. Experience: 5+ Years.

Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation. The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms.

Roles and Responsibilities: Design and develop modular Python applications for document parsing and intelligent automation. Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction. Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats (see the illustrative sketch after this listing). Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker. Collaborate with cross-functional teams to define automation goals and data strategies. Conduct code reviews, mentor junior developers, and uphold best coding practices. Monitor model performance and implement feedback mechanisms for continuous improvement. Maintain thorough documentation of workflows, metrics, and architectural decisions.

Mandatory Skills: Expert in Python (OOP, asynchronous programming, modular design). Strong foundation in machine learning algorithms and natural language processing techniques. Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers. Proficient in developing REST APIs using FastAPI or Flask. Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools. Skilled in regex-based extraction and rule-based NER. Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure). Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
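As an illustration of the rule-based extraction techniques the listing above names (PyMuPDF plus regex), here is a minimal sketch, not part of the posting; the file name, field labels, and patterns are hypothetical, and production systems would layer ML/NLP and layout-aware extraction on top of this.

```python
import re

import fitz  # PyMuPDF


def extract_invoice_fields(pdf_path: str) -> dict:
    """Pull raw text from a PDF and capture a few fields with simple regex rules."""
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)
    doc.close()

    # Hypothetical patterns; real invoices vary widely in wording and layout.
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)",
        "total_amount": r"Total\s*(?:Due)?\s*[:\-]?\s*\$?([\d,]+\.\d{2})",
        "date": r"Date\s*[:\-]?\s*(\d{2}[/-]\d{2}[/-]\d{4})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        fields[name] = match.group(1) if match else None
    return fields


if __name__ == "__main__":
    print(extract_invoice_fields("sample_invoice.pdf"))  # hypothetical file
```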

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Join our dynamic and high-impact Data team as a Data Engineer, where you'll be responsible for safely receiving and storing trading-related data for the India teams, as well as operating and improving our shared data access and data processing systems. This is a critical role in the organisation, as the data platform drives a huge range of trader analysis, simulation, reporting and insights. The ideal candidate should have work experience in systems engineering, preferably with prior exposure to financial markets, and with proven working knowledge in the fields of Linux administration, orchestration and automation tools, systems hardware architecture, as well as storage and data protection technologies.

Your Core Responsibilities: Manage and monitor all distributed systems, storage infrastructure, and data processing platforms, including HDFS, Kubernetes, Dremio, and in-house data pipelines. Drive heavy focus on systems automation and CI/CD to enable rapid deployment of hardware and software solutions. Collaborate closely with systems and network engineers, traders, and developers to support and troubleshoot their queries. Stay up to date with the latest technology trends in the industry; propose, evaluate, and implement innovative solutions.

Your Skills and Experience: 5-7 years of experience in managing large-scale multi-petabyte data infrastructure in a similar role. Advanced knowledge of Linux system administration and internals, with proven ability to troubleshoot issues in Linux environments. Deep expertise in at least one of the following technologies: Kafka, Spark, Cassandra/Scylla, or HDFS. Strong working knowledge of Docker, Kubernetes, and Helm. Experience with data access technologies such as Dremio and Presto. Familiarity with workflow orchestration tools like Airflow and Prefect. Exposure to cloud platforms such as AWS, GCP, or Azure. Proficiency with CI/CD pipelines and version control systems like Git. Understanding of best practices in data security and compliance. Demonstrated ability to solve problems proactively and creatively with a results-oriented mindset. Quick learner with excellent troubleshooting skills. High degree of flexibility and adaptability.

About Us: IMC is a global trading firm powered by a cutting-edge research environment and a world-class technology backbone. Since 1989, we've been a stabilizing force in financial markets, providing essential liquidity upon which market participants depend. Across our offices in the US, Europe, Asia Pacific, and India, our talented quant researchers, engineers, traders, and business operations professionals are united by our uniquely collaborative, high-performance culture, and our commitment to giving back. From entering dynamic new markets to embracing disruptive technologies, and from developing an innovative research environment to diversifying our trading strategies, we dare to continuously innovate and collaborate to succeed.

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

What are we looking for: Must have experience with at least one cloud platform (AWS, GCP, or Azure); AWS preferred. Must have experience with lakehouse-based systems such as Iceberg, Hudi, or Delta. Must have experience with at least one programming language (Python, Scala, or Java) along with SQL. Must have experience with Big Data technologies such as Spark, Hadoop, Hive, or other distributed systems. Must have experience with data orchestration tools like Airflow. Must have experience in building reliable and scalable ETL pipelines. Good to have experience in data modeling. Good to have exposure to building AI-led data applications/services.

Qualifications and Skills: 2-6 years of professional experience in a Data Engineering role. Knowledge of distributed systems such as Hadoop, Hive, Spark, Kafka, etc.

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

curatAId is seeking a Senior Snowflake Consultant on behalf of our client, a fast-growing organization focused on data-driven innovation. This role combines Snowflake expertise with DevOps, DBT, and Airflow to support the development and operation of a modern, cloud-based enterprise data platform. The ideal candidate will be responsible for building and managing data infrastructure, developing scalable data pipelines, implementing data quality and governance frameworks, and automating workflows for operational efficiency. To apply for this position, it is mandatory to register on our platform at www.curataid.com and take a 10-minute technical quiz on the Snowflake skill.

Title: Senior Data Engineer. Level: Consultant/Deputy Manager/Manager/Senior Manager. Relevant Experience: Minimum of 5+ years of hands-on experience on Snowflake with DevOps, DBT, Airflow. Must Have Skill: Data Engineering, Snowflake, DBT, Airflow & DevOps. Location: Mumbai, Gurgaon, Bengaluru, Chennai, Kolkata, Bhubaneshwar, Coimbatore, Ahmedabad.

Qualifications: 5+ years of relevant Snowflake experience in a data engineering context (Must Have). 4+ years of relevant experience in DBT, Airflow & DevOps (Must Have). Strong hands-on experience with data modelling, data warehousing and building high-volume ETL/ELT pipelines. Must have experience with Cloud Data Warehouses like Snowflake, Amazon Redshift, Google BigQuery or Azure Synapse. Experience with version control systems (GitHub, BitBucket, GitLab). Strong SQL expertise. Implement best practices for data storage management, security, and retrieval efficiency. Experience with pipeline orchestration tools (Fivetran, Stitch, Airflow, etc.). Coding proficiency in at least one modern programming language (Python, Java, Scala, etc.).

Posted 6 days ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Bangalore/Gurugram/Hyderabad. YOE: 7+ years.

We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities: Design and implement ETL/ELT pipelines using Databricks and PySpark. Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets (see the illustrative sketch after this listing). Develop high-performance SQL queries and optimize Spark jobs. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Ensure data quality and compliance across all stages of the data lifecycle. Implement best practices for data security and lineage within the Databricks ecosystem. Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills: Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits). Strong hands-on skills with PySpark and Spark SQL. Solid experience writing and optimizing complex SQL queries. Familiarity with Delta Lake, data lakehouse architecture, and data partitioning. Experience with cloud platforms like Azure or AWS. Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with tools like Airflow, Git, Azure Data Factory, or dbt. Exposure to streaming data and real-time processing. Knowledge of DevOps practices for data engineering.
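For context on the Unity Catalog governance work described above, here is a minimal, illustrative sketch (not part of the posting) of managing grants from a Databricks notebook via `spark.sql`. It assumes a Unity Catalog-enabled workspace where `spark` is already provided; the catalog, schema, table, and principal names are hypothetical.

```python
# Grant read access on a gold table to an analyst group.
spark.sql("GRANT SELECT ON TABLE main.sales.gold_orders TO `data-analysts`")

# Allow a pipeline service principal to use the schema and create tables in it.
spark.sql("GRANT USE SCHEMA, CREATE TABLE ON SCHEMA main.sales TO `etl-pipelines`")

# Review effective grants as part of an access audit.
spark.sql("SHOW GRANTS ON TABLE main.sales.gold_orders").show(truncate=False)
```

Centralizing these grants in Unity Catalog (rather than per-workspace ACLs) is what gives the lineage and auditing the listing refers to, since every table, permission, and query is tracked against the same catalog objects.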

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Join Zendesk as a Data Engineering Manager and lead a team of data engineers who deliver meticulously curated data assets to fuel business insights. Collaborate with Product Managers, Data Scientists, and Data Analysts to drive successful implementation of data products. We are seeking a leader with advanced skills in data infrastructure, data warehousing, and data architecture, as well as a proven track record of scaling BI teams. Be a part of our mission to embrace data and analytics and create a meaningful impact within our organization.

You will foster the growth and development of a team of data engineers, design, build, and launch new data models and pipelines in production, and act as a player-coach to amplify the effects of your team's work. Foster connections with diverse teams to understand data requirements, and help develop and support your team in technical architecture, project management, and product knowledge. Define processes for operational excellence in project management and system reliability, and set direction for the team to anticipate strategic and scaling-related challenges. Foster a healthy and collaborative culture that embodies our values.

What You Bring to the Role: - Bachelor's degree in Computer Science/Engineering or related field. - 7+ years of proven experience in Data Engineering and Data Warehousing. - 3+ years as a manager of data engineering teams. - Proficiency with SQL & any programming language (Python/Ruby). - Experience with Snowflake, BigQuery, Airflow, dbt. - Familiarity with BI Tools (Looker, Tableau) is desirable. - Proficiency in modern data stack and architectural strategies. - Excellent written and oral communication skills. - Proven track record of coaching/mentoring individual contributors and fostering a culture valuing diversity. - Experience leading SDLC and SCRUM/Agile delivery teams. - Experience working with globally distributed teams preferred.

Tech Stack: - SQL - Python/Ruby - Snowflake - BigQuery - Airflow - dbt

Please note that this position requires you to be physically located in and working from Pune, Maharashtra, India. Zendesk software was built to bring calm to the chaotic world of customer service. We advocate for digital-first customer experiences and strive to create a fulfilling and inclusive workplace experience. Our hybrid working model allows for connection, collaboration, and learning in person at our offices globally, as well as remote work flexibility. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans. If you require an accommodation to participate in the hiring process, please email peopleandplaces@zendesk.com with your specific request.
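For context on the tech stack listed in this role, here is a small, hedged Python example that runs an analytical query against BigQuery and materializes the result into a reporting table. The project, dataset, and table names are assumptions made purely for illustration.

```python
# Illustrative use of the BigQuery client to materialize a curated reporting table.
# Project, dataset, and table names are assumptions for this sketch.
from google.cloud import bigquery

client = bigquery.Client(project="analytics-prod")  # hypothetical project id

query = """
    SELECT ticket_channel,
           DATE(created_at) AS created_date,
           COUNT(*) AS tickets
    FROM `analytics-prod.support.tickets`
    GROUP BY ticket_channel, created_date
"""

# Write query results directly into a destination table, replacing prior contents
job_config = bigquery.QueryJobConfig(
    destination="analytics-prod.reporting.daily_ticket_volume",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

job = client.query(query, job_config=job_config)
job.result()  # wait for the job to complete
print("Query results written to reporting.daily_ticket_volume")
```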

Posted 6 days ago

Apply

9.0 - 11.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who We Are
Wayfair is moving the world so that anyone can live in a home they love, a journey enabled by more than 3,000 Wayfair engineers and a data-centric culture. Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and we are leveraging state-of-the-art machine learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions.

We are looking for an experienced Senior Machine Learning Scientist to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for building intelligent, ML-powered systems that drive personalized recommendations and campaign automation within Wayfair's advertising platform. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your ML expertise to define and deliver 0-to-1 capabilities that unlock substantial commercial value and directly enhance advertiser outcomes.

What You'll Do: Design and build intelligent budget, tROAS, and SKU recommendations, and simulation-driven decisioning that extends beyond the current advertising platform capabilities. Lead the next phase of GenAI-powered creative optimization and automation to drive significant incremental ad revenue and improve supplier outcomes. Raise technical standards across the team by promoting best practices in ML system design and development. Partner cross-functionally with Product, Engineering, and Sales to deliver scalable ML solutions that improve supplier campaign performance. Ensure systems are designed for reuse, extensibility, and long-term impact across multiple advertising workflows. Research and apply best practices in advertising science, GenAI applications in creative personalization, and auction modeling. Keep Wayfair at the forefront of innovation in supplier marketing optimization. Collaborate with Engineering teams (AdTech, ML Platform, Campaign Management) to build and scale the infrastructure needed for automated, intelligent advertising decisioning.

We Are a Match Because You Have: Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field. 9+ years of experience in building large-scale machine learning algorithms. 4+ years of experience working in an architect or technical leadership position. Strong theoretical understanding of statistical models such as regression and clustering, and ML algorithms such as decision trees, neural networks, transformers, and NLP techniques. Proficiency in programming languages such as Python and relevant ML libraries (e.g., TensorFlow, PyTorch) to develop production-grade products. Strategic thinker with a customer-centric mindset and a desire for creative problem solving, looking to make a big impact in a growing organization. Demonstrated success influencing senior-level stakeholders on strategic direction based on recommendations backed by in-depth analysis. Excellent written and verbal communication. Ability to partner cross-functionally to own and shape technical roadmaps. Intellectual curiosity and a desire to always be learning!

Nice to Have: Experience with GCP, Airflow, and containerization (Docker). Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc. Familiarity with Generative AI and agentic workflows. Experience in Bayesian Learning, Multi-armed Bandits, or Reinforcement Learning.

About Wayfair Inc.
Wayfair is one of the world's largest online destinations for the home. Through our commitment to industry-leading technology and creative problem-solving, we are confident that Wayfair will be home to the most rewarding work of your career. If you're looking for rapid growth, constant learning, and dynamic challenges, then you'll find that amazing career opportunities are knocking.

No matter who you are, Wayfair is a place you can call home. We're a community of innovators, risk-takers, and trailblazers who celebrate our differences, and know that our unique perspectives make us stronger, smarter, and well-positioned for success. We value and rely on the collective voices of our employees, customers, community, and suppliers to help guide us as we build a better Wayfair and world for all. Every voice, every perspective matters. That's why we're proud to be an equal opportunity employer. We do not discriminate on the basis of race, color, ethnicity, ancestry, religion, sex, national origin, sexual orientation, age, citizenship status, marital status, disability, gender identity, gender expression, veteran status, genetic information, or any other legally protected characteristic.

We are interested in retaining your data for a period of 12 months to consider you for suitable positions within Wayfair. Your personal data is processed in accordance with our Candidate Privacy Notice (which can be found here: https://www.wayfair.com/careers/privacy). If you have any questions regarding our processing of your personal data, please contact us at [HIDDEN TEXT]. If you would rather not have us retain your data, please contact us anytime at [HIDDEN TEXT].
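As a hedged illustration of the kind of modeling this role involves (and not Wayfair's actual method), the sketch below trains a small PyTorch regression network on synthetic campaign features to predict a tROAS-style target. The feature count, network size, and training setup are assumptions chosen only to keep the example runnable.

```python
# Minimal PyTorch sketch: regress a tROAS-like target from campaign features.
# Purely illustrative; the features, sizes, and training setup are assumptions.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic stand-in data: 1,000 campaigns, 8 numeric features (spend, clicks, etc.)
X = torch.randn(1000, 8)
true_w = torch.randn(8, 1)
y = X @ true_w + 0.1 * torch.randn(1000, 1)  # noisy linear target as a toy example

model = nn.Sequential(
    nn.Linear(8, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```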

Posted 6 days ago

Apply

9.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald's: One of the world's largest employers, with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: We are seeking an experienced Data Architect to design, implement, and optimize scalable data solutions on Amazon Web Services (AWS) and/or Google Cloud Platform (GCP). The ideal candidate will lead the development of enterprise-grade data architectures that support analytics, machine learning, and business intelligence initiatives while ensuring security, performance, and cost optimization.

Who we are looking for:

Primary Responsibilities:
Architecture & Design: Design and implement comprehensive data architectures using AWS or GCP services. Develop data models, schemas, and integration patterns for structured and unstructured data. Create solution blueprints, technical documentation, architectural diagrams, and best-practice guidelines. Implement data governance frameworks and ensure compliance with security standards. Design disaster recovery and business continuity strategies for data systems.
Technical Leadership: Lead cross-functional teams in implementing data solutions and migrations. Provide technical guidance on cloud data services selection and optimization. Collaborate with stakeholders to translate business requirements into technical solutions. Drive adoption of cloud-native data technologies and modern data practices.
Platform Implementation: Implement data pipelines using cloud-native services (AWS Glue, Google Dataflow, etc.). Configure and optimize data lakes and data warehouses (S3/Redshift, GCS/BigQuery). Set up real-time streaming data processing solutions (Kafka, Airflow, Pub/Sub). Implement automated data quality monitoring and validation processes. Establish CI/CD pipelines for data infrastructure deployment.
Performance & Optimization: Monitor and optimize data pipeline performance and cost efficiency. Implement data partitioning, indexing, and compression strategies. Conduct capacity planning and make scaling recommendations. Troubleshoot complex data processing issues and performance bottlenecks. Establish monitoring, alerting, and logging for data systems.

Skills: Bachelor's degree in Computer Science, Data Engineering, or a related field. 9+ years of experience in data architecture and engineering. 5+ years of hands-on experience with AWS or GCP data services. Experience with large-scale data processing and analytics platforms. AWS Redshift, S3, Glue, EMR, Kinesis, Lambda. AWS Data Pipeline, Step Functions, CloudFormation. BigQuery, Cloud Storage, Dataflow, Dataproc, Pub/Sub. GCP Cloud Functions, Cloud Composer, Deployment Manager. IAM, VPC, and security configurations. SQL and NoSQL databases. Big data technologies (Spark, Hadoop, Kafka). Programming languages (Python, Java, SQL). Data modeling and ETL/ELT processes. Infrastructure as Code (Terraform, CloudFormation). Container technologies (Docker, Kubernetes). Data warehousing concepts and dimensional modeling. Experience with modern data architecture patterns. Real-time and batch data processing architectures. Data governance, lineage, and quality frameworks. Business intelligence and visualization tools. Machine learning pipeline integration. Strong communication and presentation abilities. Leadership and team collaboration skills. Problem-solving and analytical thinking. Customer-focused mindset with business acumen.

Preferred Qualifications: Master's degree in a relevant field. Cloud certifications (AWS Solutions Architect, GCP Professional Data Engineer). Experience with multiple cloud platforms. Knowledge of data privacy regulations (GDPR, CCPA).

Work location: Hyderabad, India. Work pattern: Full-time role. Work mode: Hybrid.

Additional Information: McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald's Capability Center India Private Limited (McDonald's in India) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture.

At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
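To ground the pipeline and partitioning responsibilities listed above, here is a hedged sketch of an AWS Glue job that reads a cataloged table and writes partitioned Parquet to S3. The database, table, bucket, and partition column are illustrative assumptions, not details from the posting.

```python
# Illustrative AWS Glue job: read a cataloged table, write partitioned Parquet to S3.
# Database, table, bucket, and partition key are assumptions for this sketch.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw orders table registered in the Glue Data Catalog
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Write to the curated zone as Parquet, partitioned by order_date for partition pruning
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={
        "path": "s3://curated-zone-bucket/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()
```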

Posted 6 days ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Role: We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Additionally, proficiency in SQL, data modeling, and building data warehouses, as well as knowledge of ingestion tools and data governance, is essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python, with knowledge of Pandas being beneficial.

Why Choose Ideas2IT?
Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this (see here and here). We are following suit.

What's in it for you?
You will get to work on impactful products instead of back-office applications for customers like Facebook, Siemens, Roche, and more. You will get to work on interesting projects like the Cloud AI platform for personalized cancer treatment. Opportunity to continuously learn newer technologies. Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel. Showcase your talent in Shark Tanks and Hackathons conducted in the company.

Here's what you'll bring:
Experience in designing and building data platforms in any cloud. Strong expertise in either AWS Data Engineering or Azure Data Engineering. Develop and optimize data processing pipelines using distributed systems like Spark. Create and maintain data models to support efficient storage and retrieval. Build and optimize data warehouses for analytical and reporting purposes, utilizing technologies such as Postgres, Redshift, Snowflake, etc. Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory. Establish and enforce data governance policies and procedures to ensure data quality and security. Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows. Develop scripts and applications in Python to automate tasks and processes. Collaborate with stakeholders to gather requirements and translate them into technical specifications. Communicate technical solutions effectively to clients and stakeholders. Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP). Experience with containerization and orchestration technologies like Docker and Kubernetes. Knowledge of machine learning and data science concepts. Experience with data visualization tools such as Tableau or Power BI. Understanding of DevOps principles and practices.
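Since the role calls out orchestration with Airflow or Dagster and Python with Pandas, here is a hedged, minimal Dagster sketch of a single software-defined asset built with Pandas. The CSV path and column names are assumptions made only so the example is concrete.

```python
# Minimal Dagster sketch: one software-defined asset that cleans a CSV with Pandas.
# The file path and column names are illustrative assumptions.
import pandas as pd
from dagster import Definitions, asset


@asset
def clean_orders() -> pd.DataFrame:
    """Load raw orders, drop rows with missing amounts, and normalize the date column."""
    raw = pd.read_csv("data/raw_orders.csv")  # hypothetical input file
    cleaned = raw.dropna(subset=["amount"]).copy()
    cleaned["order_date"] = pd.to_datetime(cleaned["order_date"])
    return cleaned


# Register the asset so `dagster dev` can materialize and schedule it
defs = Definitions(assets=[clean_orders])
```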

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Job Title: AI Research Engineer Intern (Fresher)
Reporting to: Lead – Research & Innovation Lab
Location: Remote / Hybrid (Chennai, India)
Engagement: 6-month, full-time paid internship with pre-placement-offer track

1. Why this role exists
Stratsyn AI Technology Services is turbo-charging Stratsyn's cloud-native Enterprise Intelligence & Management Suite, a modular SaaS ecosystem that fuses advanced AI, low-code automation, multimodal search, and next-generation "Virtual workforce" agents. The platform unifies strategic planning, document intelligence, workflow orchestration, and real-time analytics, empowering C-suite leaders to simulate scenarios, orchestrate execution, and convert insight into action with unmatched speed and scalability. To keep pushing that frontier, we need sharp, curious minds who can translate cutting-edge research into production-grade capabilities for this suite. This internship is our talent funnel into future Research Engineer and Product Scientist roles.

2. What you'll do (core responsibilities)
30% – Rapid Prototyping & Experimentation: implement state-of-the-art papers (LLMs, graph learning, causal inference, agents), design ablation studies, benchmark against baselines, and iterate fast.
25% – Data Engineering for Research: build reproducible datasets, craft synthetic data when needed, automate ETL pipelines, and enforce experiment tracking (MLflow / Weights & Biases).
20% – Model Evaluation & Explainability: create evaluation harnesses (BLEU, ROUGE, MAPE, custom KPIs), visualize error landscapes, and generate executive-ready insights.
15% – Collaboration & Documentation: author tech memos and well-annotated notebooks, contribute to internal knowledge bases, and present findings in weekly research stand-ups.
10% – Innovation Scouting: scan arXiv, ACL, NeurIPS, ICML, and startup ecosystems; summarize high-impact research and propose areas for IP creation within the Suite.

3. What you will learn / outcomes to achieve
Master the end-to-end research workflow: literature review → hypothesis → prototype → validation → deployment shadow. Deliver one peer-review-quality technical report and two production-grade proofs of concept for the Suite. Achieve a measurable impact (e.g., an 8-10% forecasting-accuracy lift or a 30% latency reduction) on a live micro-service.

4. Minimum qualifications (freshers welcome)
B.E./B.Tech/M.Sc./M.Tech in CS, Data Science, Statistics, EE, or related (2024-2026 pass-out). Fluency in Python and at least one deep-learning framework (PyTorch preferred). Solid grasp of linear algebra, probability, optimization, and algorithms. Hands-on academic or personal projects in NLP, CV, time-series, or RL (GitHub links highly valued).

5. Preferred extras
Publications or a Kaggle/ML-competition record. Experience with distributed training (GPU clusters, Ray, Lightning) and experiment-tracking tools. Familiarity with MLOps (Docker, CI/CD, Kubernetes) or data-centric AI. Domain knowledge in supply-chain, fintech, climate, or marketing analytics.

6. Key attributes & soft skills
First-principles thinker – questions assumptions, proposes novel solutions. Bias for action – prototypes in hours, not weeks; embraces agile experimentation. Storytelling ability – explains complex models in clear, executive-friendly language. Ownership mentality – treats the prototype as a product, not just a demo.

7. Tech stack you'll touch
Python | PyTorch | Hugging Face | TensorRT | LangChain | Neo4j/GraphDB | PostgreSQL | Airflow | MLflow | Weights & Biases | Docker | GitHub Actions | JAX (exploratory)

8. Internship logistics & perks
Competitive monthly stipend + performance bonus. High-end workstation + GPU credits on our private cloud. Dedicated mentor and a 30-60-90-day learning plan. Access to premium research portals and paid conference passes. Culture of radical candor, weekly brown-bag tech talks, and hack days. Fast track to a full-time AI Research Engineer role upon successful completion.

9. Application process
Apply via email: send your résumé, a brief statement of purpose, and GitHub/portfolio links to HR@stratsyn.ai. Online coding assessment: algorithmic + ML fundamentals. Technical interview (2 rounds): deep dive into projects, math, and research reasoning. Culture-fit discussion: with the Research Lead & CPO. Offer & onboarding: target turnaround < 3 weeks.
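Because the role emphasizes reproducible experiments and tracking with MLflow, here is a hedged sketch of logging parameters and metrics for a toy run. The experiment name, parameters, and metric values are illustrative assumptions, not part of the posting.

```python
# Minimal MLflow tracking sketch: log parameters and metrics for one experiment run.
# Experiment name, parameters, and metric values are illustrative assumptions.
import mlflow

mlflow.set_experiment("prototype-forecasting")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline-linear"):
    # Record the configuration that produced this run
    mlflow.log_param("model", "linear_regression")
    mlflow.log_param("learning_rate", 0.01)

    # In a real run these values would come from the evaluation harness
    mlflow.log_metric("mape", 0.142)
    mlflow.log_metric("latency_ms", 38.0)
```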

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Job Title: DevOps Engineer
Location: GIFT City, Gandhinagar
Shift Timings: 9:00 AM - 6:00 PM

About Us
We are a dynamic and fast-paced organization operating in the algorithmic trading domain, seeking a highly motivated DevOps Engineer who can quickly grasp new concepts and take charge of managing our backend infrastructure. You will play a crucial role in ensuring smooth trading operations by optimizing and maintaining scalable infrastructure.

Key Responsibilities
Design and Implementation: Develop robust solutions for monitoring and metrics infrastructure to support algorithmic trading. Technical Support: Provide hands-on support for live trading environments on Linux servers, ensuring seamless trading activities. Problem Resolution: Leverage technical and analytical skills to identify and resolve issues related to application and business functionality promptly. Database Management: Administer databases and execute SQL queries to perform on-demand analytics, driving data-informed decision-making. Application & Network Management: Manage new installations, troubleshoot network issues, and optimize network performance for a stable trading environment. Python Infrastructure Management: Oversee Python infrastructure including Airflow, logs, monitoring, and alert systems; address any operational issues that arise. Airflow Management: Develop new DAGs (Python scripts) and optimize existing workflows for efficient trading operations. Infrastructure Optimization: Manage and optimize infrastructure using tools such as Ansible, Docker, and automation technologies. Cross-Team Collaboration: Collaborate with various teams to provide operational support for different projects. Proactive Monitoring: Implement monitoring solutions to detect and address potential issues before they impact trading activities. Documentation: Maintain comprehensive documentation of trading systems, procedures, and workflows for reference and training. Regulatory Compliance: Ensure full adherence to global exchange rules and regulations, maintaining compliance with all legal requirements. Global Market Trading: Execute trades on global exchanges during night shifts, utilizing algorithmic strategies to optimize trading performance.

Requirements
Experience: 2-4 years in a DevOps role, preferably in a trading environment. Education: Bachelor's degree in Computer Science or a related field.

Technical Skills
Proficiency in Linux, Python, and SQL. Hands-on experience with Ansible, Docker, and automation tools. Experience managing CI/CD pipelines for frequent code testing and deployment.

Good to Have
Advanced Python proficiency. NISM certification.

Key Attributes
Strong troubleshooting and problem-solving skills. Excellent communication skills. Ability to handle multiple tasks efficiently with strong time management. Proactive and ownership-driven attitude.

Trading Experience
Experience trading in NSE and global markets is required.

Perks and Benefits
Competitive salary and attractive perks. Five-day work week (Monday to Friday). Four weeks of paid leave annually plus festival holidays. Annual/biannual offsites and team gatherings. Transparent and flat hierarchy fostering growth opportunities. Opportunity to work in a dynamic, fast-paced environment with a passionate team of professionals.
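As a hedged example of the "develop new DAGs" and proactive-monitoring duties described in this posting, the sketch below defines a small Airflow DAG whose PythonOperator checks a service health endpoint every few minutes. The endpoint URL, DAG id, and schedule are assumptions, not details from the posting.

```python
# Minimal Airflow DAG sketch for proactive monitoring of a trading service.
# The health-check URL and schedule are illustrative assumptions.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def check_service_health() -> None:
    """Fail the task (and trigger Airflow alerting) if the service is unhealthy."""
    response = requests.get("http://localhost:8080/health", timeout=5)  # hypothetical endpoint
    response.raise_for_status()


with DAG(
    dag_id="trading_service_healthcheck",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="*/5 * * * *",                 # every five minutes
    catchup=False,
) as dag:
    PythonOperator(
        task_id="check_service_health",
        python_callable=check_service_health,
    )
```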

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies