8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a strong Data Engineer with data analysis/profiling skills for our Enterprise Data Organization to develop and manage data pipelines (data ingestion, transformation, storage, etc.) for an Azure/Snowflake cloud-based data analytics platform. The candidate will possess strong technical, analytical, programming, and critical-thinking skills, and will ideally have good experience with data transformation, data modeling, master data management, and metadata management. Excellent communication skills are essential, and leadership skills are a plus: the candidate will be working closely with senior leadership and the product team, and may oversee a team of engineers.

Essential Functions
- Develop and manage data pipelines (data ingestion, transformation, storage, etc.) for an Azure/Snowflake cloud-based data analytics platform, applying strong data analysis and profiling skills.

Qualifications
- Advanced SQL: queries, scripts, stored procedures, materialized views, and views
- Focus on ELT: load data into the database and perform transformations in the database (a short sketch follows this posting)
- Ability to use analytical SQL functions
- Snowflake experience
- Cloud data warehouse experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming
- Experience with DevOps models utilizing a CI/CD tool
- Hands-on experience in the Azure cloud platform (ADLS, Blob)
- Airflow would be a plus
- Good interpersonal skills; comfort and competence in dealing with different teams within the organization; ability to interface with multiple constituent groups and build sustainable relationships
- Strong and effective communication skills (verbal and written)
- Strong analytical and problem-solving skills
- Experience working in a matrix organization
- Ability to prioritize and deliver; results-oriented, flexible, adaptable
- Works well independently and can lead a team
- Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions
- Ability to master new skills
- Familiarity with Agile practices and methodologies
- Professional data engineering experience focused on batch and real-time data pipelines using Spark, Python, and SQL
- Data warehousing (data modeling, programming)
- Experience working with Snowflake
- Experience working in a cloud environment, preferably Microsoft Azure
- Cloud data warehouse solutions (Snowflake, Azure DW)

We Offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office

About Us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
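For candidates wondering what "perform transformations in database" with analytical SQL functions looks like in practice, here is a minimal sketch using the snowflake-connector-python package; the connection parameters, table, and column names are illustrative assumptions, not details from the posting.

```python
# A hedged sketch of an ELT-style in-database transformation with an
# analytical (window) function; all identifiers below are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)

# Raw data is assumed already loaded (the "EL" of ELT); the transformation
# runs inside Snowflake rather than in application code.
conn.cursor().execute("""
    CREATE OR REPLACE TABLE daily_revenue AS
    SELECT
        order_date,
        SUM(amount) AS revenue,
        AVG(SUM(amount)) OVER (
            ORDER BY order_date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS revenue_7d_avg
    FROM raw_orders
    GROUP BY order_date
""")
conn.close()
```

The window function over an aggregate (`AVG(SUM(amount)) OVER ...`) is the kind of analytical SQL the posting calls out: the 7-day moving average is computed entirely inside the warehouse.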
Posted 1 week ago
15.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: Candidates with 15+ years of hands-on experience

Required Skills (Must Have)
- Solid knowledge and experience of supervised and unsupervised machine learning algorithms, e.g. (but not limited to) linear regression, Bayesian regression, multi-objective optimization techniques, classifiers, cluster analysis, and dimension reduction
- Understanding of the techniques used in retail analytics across loyalty, customer analytics, assortment, promotion, and marketing
- Good knowledge of statistics, e.g. statistical tests and distributions
- Experience in data analysis, e.g. data cleansing, standardization, and data preparation for machine learning use cases
- Experience with machine learning frameworks and tools (e.g. scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib); a scikit-learn sketch follows this posting
- Advanced-level programming in SQL and Python/PySpark, with the ability to guide teams
- Expertise with visualization tools, e.g. Tableau, Power BI, AWS QuickSight

Nice To Have
- Working knowledge of containerization (e.g. AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g. Airflow)
- Experience with model explainability and interpretability techniques
- Ability to multi-task and manage multiple deadlines
- Responsibility for incorporating client/user feedback into the product
- Ability to think through complex user scenarios and design simple yet effective user interactions
- Good communication and presentation skills

Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
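As a concrete illustration of the unsupervised techniques listed above (standardization, dimension reduction, cluster analysis), here is a minimal scikit-learn sketch; the random feature matrix is a stand-in for a real customer dataset.

```python
# A minimal sketch: standardize, reduce dimensionality, then cluster.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))  # 500 "customers", 12 behavioural features

X_scaled = StandardScaler().fit_transform(X)              # data preparation
X_reduced = PCA(n_components=3).fit_transform(X_scaled)   # dimension reduction
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)

print("silhouette:", silhouette_score(X_reduced, labels).round(3))
```

On real data the cluster count and component count would be chosen by inspection (elbow plots, silhouette scores) rather than fixed up front.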
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences, all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world and on any device. We’re on a mission to connect a billion people with optimism and civility, and we're looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

Roblox Operating System (ROS) is our internal productivity platform that governs how Roblox operates as a company. Through an integrated suite of tools, ROS shapes how we make talent and personnel decisions, plan and organize work, discover knowledge, and scale efficiently. We are seeking a hands-on Engineering Manager to establish and lead a high-performing data engineering team for ROS in India. Collaborating with US-based ROS and Data Engineering teams, as well as the People Science & Analytics team, you will build scalable data pipelines, robust infrastructure, and impactful insights. Working with the US ROS Engineering Manager, you will set high technical standards, champion leadership principles, and drive innovation while shaping the future of data engineering at Roblox.

You Will
- Build and Lead: Attract, hire, mentor, and inspire a team of exceptional engineers with varied strengths. Cultivate a collaborative and inclusive environment where everyone thrives.
- Set the Bar: Establish and maintain a high standard for technical excellence and data quality. Ensure your team delivers reliable, scalable, and secure solutions that adhere to Roblox's engineering principles. Be prepared to be hands-on, with the ability to code and contribute to reviews and technical design discussions.
- Cross-Functional Collaboration: Partner with data scientists and analysts, product and engineering, and other stakeholders to understand business needs and translate them into technical solutions.
- Strategic Planning: Contribute to the overall engineering strategy for Roblox India. Find opportunities for innovation and growth, and prioritize projects that deliver the most value to our users.
- Continuous Improvement: Cultivate a culture of learning and continuous improvement within your team. Encourage experimentation, knowledge sharing, and adoption of new technologies.

You Have
- Proven Leadership: Demonstrated experience leading and scaling data engineering teams, ideally in a high-growth environment.
- Technical Expertise: Solid understanding of data engineering principles and best practices for data governance. Experience building scalable data pipelines (Airflow or similar orchestration frameworks; a minimal Airflow sketch follows this posting). Proficiency in SQL and relational databases. Familiarity with data warehouse solutions (e.g., Snowflake, Redshift, BigQuery) and data streaming platforms (e.g., Kafka, Kinesis, Spark). Knowledge of containerization (e.g., Docker) and cloud infrastructure (e.g., AWS, Azure, GCP).
- Roblox Alignment: Strong alignment with Roblox's leadership principles, including a focus on respect, safety, creativity, and community.
- Excellent Communication: Exceptional communication and interpersonal skills. Ability to build rapport with team members, stakeholders, and leaders across the organization.
- Problem-Solving: Strong analytical and problem-solving skills. Ability to break down complex challenges and develop creative solutions.
- Passion for Roblox: A genuine excitement for our platform and the possibilities of the metaverse.

Roles that are based in our San Mateo, CA headquarters are in-office Tuesday, Wednesday, and Thursday, with optional in-office on Monday and Friday (unless otherwise noted). Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
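A minimal sketch of the kind of Airflow pipeline referenced above, using the Airflow 2.x API; the DAG id, schedule, and task bodies are illustrative assumptions.

```python
# A minimal Airflow 2.x DAG: three dependent tasks on a daily schedule.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")   # placeholder task body

def transform():
    print("clean and aggregate")                # placeholder task body

def load():
    print("write to the warehouse")             # placeholder task body

with DAG(
    dag_id="ros_example_etl",         # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```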
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage (a BigQuery sketch follows this posting).
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features, and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
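A minimal sketch of the BigQuery work described above, using the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical.

```python
# Run an aggregation in BigQuery and pull the result into a DataFrame.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # hypothetical project id

sql = """
    SELECT event_date, COUNT(DISTINCT user_id) AS daily_users
    FROM `my-project.analytics.events`           -- hypothetical table
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

df = client.query(sql).to_dataframe()
print(df.head())
```

In a production pipeline this query would typically be wrapped in an orchestrated task (e.g., an Airflow operator) rather than run ad hoc.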
Posted 1 week ago
8.0 years
0 Lacs
Hyderābād
On-site
Job Req ID: 47377
Location: Hyderabad, IN
Function: Technology / IoT / Cloud

Role Overview
We are looking for a hands-on Data Engineer with 8+ years of experience to build, manage, and scale data pipelines, deploy ML solutions, and enable advanced data visualizations and dashboards for business consumption. The ideal candidate will have a strong engineering mindset, a deep understanding of data infrastructure, and prior experience working on self-managed or private cloud (VM-based) deployments. Candidates from premier institutes (IITs, NITs, or equivalent Tier-1/2 schools) are strongly preferred.

Key Responsibilities
- Design and build robust, scalable, and secure data pipelines (batch and real-time) to support AI/ML workloads and BI dashboards.
- Collaborate with data scientists to operationalize ML models, including containerization (Docker), CI/CD pipelines, model serving (FastAPI/Flask; a serving sketch follows this posting), and monitoring.
- Develop and maintain interactive dashboards using tools such as Plotly Dash, Power BI, or Streamlit to visualize key insights for business stakeholders.
- Manage deployments and orchestration on Vi’s local private cloud infrastructure (VM-based setups).
- Work closely with analytics, business, and DevOps teams to ensure reliable data availability and system health.
- Optimize ETL/ELT workflows for performance and scale across large telecom datasets.
- Implement data quality checks, governance, and logging/monitoring solutions for all production workloads.

Required Qualifications & Skills
- 8+ years of experience in data engineering, platform development, and/or ML deployment.
- B.Tech/M.Tech from Tier-1 or Tier-2 institutes (IITs, NITs, IIITs, BITS, etc.) preferred.
- Strong proficiency in Python, SQL, and data pipeline frameworks (Airflow, Luigi, or similar).
- Solid experience with containerization (Docker), scripting, and deploying production-grade ML or analytics services.
- Hands-on experience with dashboarding and visualization tools such as Power BI, Tableau, or Streamlit; custom front-end dashboards are nice to have.
- Experience working on self-managed VMs, bare-metal servers, or local private clouds (not just public cloud services).
- Familiarity with ML deployment architectures, REST APIs, and performance tuning.

Preferred Skills
- Experience with Kafka, Spark, or distributed processing systems.
- Exposure to MLOps tools (MLflow, DVC, Kubeflow).
- Understanding of telecom data and analytics use cases.
- Ability to lead and mentor junior engineers or analysts.
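A minimal sketch of FastAPI-based model serving as mentioned in the responsibilities; the model artifact path and feature schema are assumptions.

```python
# Serve a pre-trained model behind a JSON prediction endpoint.
import joblib                        # assumes a scikit-learn model saved with joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact path

class Features(BaseModel):
    values: list[float]              # flat feature vector, order fixed at training time

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Run locally with `uvicorn main:app --reload` (assuming the file is main.py); in the setup the posting describes, this service would then be containerized with Docker and deployed through the CI/CD pipeline.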
Posted 1 week ago
5.0 years
8 - 10 Lacs
Thiruvananthapuram
Remote
5 - 7 Years
1 Opening
Kochi, Trivandrum

Role Description
Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems
Location: Any UST location
Job Type: Full-Time
Experience Level: Senior / Lead

Role Overview
We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques, such as Retrieval-Augmented Generation (RAG) and LLM-based applications, into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred.

Key Responsibilities
- Model Development & Deployment: Design, implement, and manage scalable ML pipelines using CI/CD practices. Operationalize ML and GenAI models, ensuring high availability, observability, and reliability. Automate data and model validation, versioning, and monitoring processes (an MLflow sketch follows this posting).
- Technical Leadership & Mentorship: Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices. Define architecture standards and promote engineering excellence across ML-Ops workflows.
- Innovation & Generative AI Strategy: Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications. Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks.
- Governance & Compliance: Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability. Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions.

Must-Have Skills
- 5+ years of experience in ML-Ops, data engineering, or machine learning.
- Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow).
- Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools.
- Experience with LLM frameworks (e.g., LangChain, Hugging Face Transformers) and GenAI use cases like RAG.
- Strong understanding of responsible AI and MLOps governance best practices.
- Proven ability to work cross-functionally and lead technical discussions.

Good-to-Have Skills
- Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks.
- Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration.
- Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks).
- Experience in mentoring, technical leadership, or project ownership roles.

Why Join Us?
- Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation.
- Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning.
- Competitive compensation, remote flexibility, and opportunities for career advancement.

Skills: Artificial Intelligence, Python, ML-Ops

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
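A minimal sketch of experiment tracking with MLflow, one of the lifecycle tools named in the must-have skills; the dataset and model choice are illustrative assumptions, not part of the posting.

```python
# Log parameters, a metric, and a model artifact for one training run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = Ridge(alpha=0.5).fit(X_tr, y_tr)
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for later serving
```

This is the versioning/monitoring building block; in a full ML-Ops setup the same run metadata would feed model registry promotion and drift monitoring.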
Posted 1 week ago
5.0 years
3 - 9 Lacs
Mumbai
On-site
JOB DESCRIPTION
Join our dynamic team as a Sr. Lead Software Engineer, where you will have the opportunity to solve complex problems and contribute to our innovative projects. With us, you can enhance your skills in Python, PySpark, and cloud architecture while working in an inclusive and respectful team environment. This role offers immense growth potential and a chance to work with cutting-edge technologies.

As a Sr. Lead Software Engineer - Python / Spark Big Data at JPMorgan Chase within the Capital Reporting product, you will be executing software solutions and designing, developing, and troubleshooting technical issues. We value diversity, equity, inclusion, and respect in our team culture. This role provides an opportunity to contribute to software engineering communities of practice and events that explore new and emerging technologies. You will have the chance to proactively identify hidden problems and patterns in data and use these insights to promote improvements to coding hygiene and system architecture.

Job responsibilities
- Execute software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Proactively identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture.
- Contribute to software engineering communities of practice and events that explore new and emerging technologies.
- Add to a team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills
- Formal training or certification in Python or PySpark concepts and 5+ years of applied experience (a minimal PySpark sketch follows this posting).
- Demonstrated knowledge of software applications and technical processes within a cloud or microservices architecture.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
- Overall knowledge of the Software Development Life Cycle.
- Solid understanding of agile methodologies such as CI/CD, application resiliency, and security.

Preferred qualifications, capabilities, and skills
- Exposure to cloud technologies (Airflow, Astronomer, Kubernetes, AWS, Spark, Kafka).
- Experience with Big Data solutions or relational databases.
- Experience in the financial services industry is nice to have.
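A minimal PySpark sketch of the kind of batch aggregation such a role involves; the input path, schema, and output location are hypothetical, not taken from the posting.

```python
# Read raw trades, aggregate daily notional per desk, write partitioned output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-capital-aggregates").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/raw/trades/")   # hypothetical path

daily = (
    trades
    .withColumn("trade_date", F.to_date("trade_ts"))
    .groupBy("trade_date", "desk")
    .agg(F.sum("notional").alias("total_notional"))
)

(daily.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-bucket/curated/daily_notional/"))
```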
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity (a data-quality sketch follows this posting).
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features, and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
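A minimal sketch of the automated data-quality checks mentioned in the responsibilities, written with pandas; the column rules are illustrative assumptions about a typical orders table.

```python
# Rule-based data-quality checks: return a list of failure descriptions.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    return failures

df = pd.DataFrame({"order_id": [1, 2, 2],
                   "amount": [10.0, -5.0, 7.5],
                   "customer_id": ["a", None, "c"]})
print(run_quality_checks(df))  # all three rules fail on this sample
```

In a GCP pipeline the same rules would typically run as assertions inside an orchestrated task, with failures raised to the monitoring/alerting system rather than printed.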
Posted 1 week ago
6.0 years
0 Lacs
India
On-site
Agnito Is Hiring: Data Engineer – AI Infrastructure
Location: Bhopal (Work From Office)
Vacancy: 1
Experience: 6+ years building AI/ML data pipelines in cloud environments
Package: No bar for the right candidate

Role Overview
Agnito Technologies is looking for an experienced Data Engineer to join our AI infrastructure team. The ideal candidate will be responsible for designing, developing, and maintaining robust, scalable data pipelines to power machine learning workflows and AI models in cloud environments.

Key Skills Required
- ETL pipeline development (a minimal incremental-load sketch follows this posting)
- Apache Airflow for workflow orchestration
- Apache Spark for large-scale data processing
- BigQuery / Snowflake for cloud-based data warehousing
- Feature store integration to support ML model training and serving
- Strong understanding of cloud-native data engineering principles (GCP, AWS, or Azure)

Eligibility Criteria
- 6+ years of hands-on experience as a Data Engineer
- Proven track record in designing and maintaining production-grade ETL pipelines
- Experience enabling AI/ML data pipelines in cloud environments
- Solid understanding of performance tuning and data optimization

Job Type: Full-time
Schedule: Day shift
Work Location: In person
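As one common ETL pattern behind postings like this, here is a minimal watermark-based incremental load, sketched with SQLite so it runs standalone; in practice the source and destination would be systems like the warehouses named above, and the schema here is an assumption.

```python
# Incremental ("watermark") load: copy only rows newer than what the
# destination already has, so reruns are cheap and idempotent.
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, updated_at TEXT)")
src.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(1, "a", "2024-01-01"), (2, "b", "2024-01-02")])

def incremental_load(src: sqlite3.Connection, dst: sqlite3.Connection) -> int:
    watermark = dst.execute(
        "SELECT COALESCE(MAX(updated_at), '') FROM events").fetchone()[0]
    rows = src.execute(
        "SELECT id, payload, updated_at FROM events WHERE updated_at > ?",
        (watermark,)).fetchall()
    dst.executemany("INSERT OR REPLACE INTO events VALUES (?, ?, ?)", rows)
    dst.commit()
    return len(rows)

print(incremental_load(src, dst))  # 2 on the first run
print(incremental_load(src, dst))  # 0 on an immediate rerun
```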
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Climate Systems Intern – Urban Farming (Paid Internship)
Location: Mumbai (Hybrid)
Duration: 4 to 6 weeks
Stipend: ₹15,000 - ₹20,000 + bonus for strong delivery

What You’ll Do
- Design a simple, energy-efficient climate system (for temperature, humidity, and airflow) in ~50-100 sq ft farm units
- Run airflow / heat-load simulations (CFD tools, psychrometric modeling, etc.)
- Prototype your system using fans, sensors, and basic controllers (we’ll support with parts/tools)
- Test performance on-site with a live Cosy pilot unit
- Document the entire design: BOM, diagrams, test data, energy usage, and suggestions for future improvements

You Might Be Right If You
- Are a final-year Mechanical / Electrical / HVAC / Mechatronics student or recent graduate
- Understand basic thermodynamics, VPD, heat and mass transfer, and airflow (a worked VPD example follows this posting)
- Can use (or learn quickly) SimScale, CoolPack, or similar CFD tools, and Arduino/Raspberry Pi for basic control logic
- Love prototyping: you’d rather build a bad v1 than stare at a blank spec sheet
- Want to apply your skills to something that actually gets built

What We’re Looking For
- Temperature and humidity must be controllable and energy-efficient
- The system must be tested and shown working with real farm data
- Bonus if you can find ways to avoid AC or optimize its runtime
- Full documentation is required (don’t worry, we’ll guide you on the format)

How to Apply
Send your resume and a short note on why this excites you to hr@cosyfarmers.in
Subject: Climate Systems Intern – [Your Name]
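For context on the VPD requirement above, here is a quick worked example in Python using the widely used Tetens approximation for saturation vapor pressure; treat it as an illustrative sketch rather than project code.

```python
# Vapor-pressure deficit (VPD) from air temperature and relative humidity.
import math

def vpd_kpa(temp_c: float, rh_percent: float) -> float:
    """VPD in kPa given air temperature (deg C) and relative humidity (%)."""
    # Tetens approximation for saturation vapor pressure, in kPa
    svp = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return svp * (1 - rh_percent / 100)

print(f"{vpd_kpa(26.0, 65.0):.2f} kPa")  # ~1.18 kPa at 26 deg C / 65% RH
```

A controller on an Arduino or Raspberry Pi would compute this from its temperature/humidity sensor readings and switch fans or misters to hold VPD in the crop's target band.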
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Overview
We are looking for a hands-on Data Engineer with 8+ years of experience to build, manage, and scale data pipelines, deploy ML solutions, and enable advanced data visualizations and dashboards for business consumption. The ideal candidate will have a strong engineering mindset, a deep understanding of data infrastructure, and prior experience working on self-managed or private cloud (VM-based) deployments. Candidates from premier institutes (IITs, NITs, or equivalent Tier-1/2 schools) are strongly preferred.

Key Responsibilities
- Design and build robust, scalable, and secure data pipelines (batch and real-time) to support AI/ML workloads and BI dashboards.
- Collaborate with data scientists to operationalize ML models, including containerization (Docker), CI/CD pipelines, model serving (FastAPI/Flask), and monitoring.
- Develop and maintain interactive dashboards using tools such as Plotly Dash, Power BI, or Streamlit to visualize key insights for business stakeholders (a Streamlit sketch follows this posting).
- Manage deployments and orchestration on Vi’s local private cloud infrastructure (VM-based setups).
- Work closely with analytics, business, and DevOps teams to ensure reliable data availability and system health.
- Optimize ETL/ELT workflows for performance and scale across large telecom datasets.
- Implement data quality checks, governance, and logging/monitoring solutions for all production workloads.

Required Qualifications & Skills
- 8+ years of experience in data engineering, platform development, and/or ML deployment.
- B.Tech/M.Tech from Tier-1 or Tier-2 institutes (IITs, NITs, IIITs, BITS, etc.) preferred.
- Strong proficiency in Python, SQL, and data pipeline frameworks (Airflow, Luigi, or similar).
- Solid experience with containerization (Docker), scripting, and deploying production-grade ML or analytics services.
- Hands-on experience with dashboarding and visualization tools such as Power BI, Tableau, or Streamlit; custom front-end dashboards are nice to have.
- Experience working on self-managed VMs, bare-metal servers, or local private clouds (not just public cloud services).
- Familiarity with ML deployment architectures, REST APIs, and performance tuning.

Preferred Skills
- Experience with Kafka, Spark, or distributed processing systems.
- Exposure to MLOps tools (MLflow, DVC, Kubeflow).
- Understanding of telecom data and analytics use cases.
- Ability to lead and mentor junior engineers or analysts.
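A minimal sketch of a Streamlit dashboard like those described above; the KPI values are randomly generated stand-ins for real telecom metrics.

```python
# A one-file Streamlit dashboard: a title, a trend chart, and a headline metric.
import numpy as np
import pandas as pd
import streamlit as st

st.title("Network KPI Dashboard")

days = pd.date_range("2024-01-01", periods=30, freq="D")
kpis = pd.DataFrame({
    "date": days,
    "throughput_gbps": np.random.default_rng(1).normal(40, 5, 30).round(1),
}).set_index("date")

st.line_chart(kpis)                                            # 30-day trend
st.metric("Latest throughput (Gbps)", kpis["throughput_gbps"].iloc[-1])
```

Launched with `streamlit run app.py`; in production the random data would be replaced by a query against the pipeline's serving tables.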
Posted 1 week ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Let’s be #BrilliantTogether. ISS STOXX is looking for a Cloud Security Engineer to join our team in Mumbai, India.

Overview
We are looking for a talented engineer to bring technical expertise to the development and deployment of our cutting-edge financial intelligence platform. In this role, you will leverage your technical expertise and innovative mindset to lead the design, implementation, operation, and optimization of our platform infrastructure, ensuring its ability to deliver efficient and reliable data services to our global client base. As a senior member of the technical team, you will collaborate with cross-functional peers and stakeholders to drive continuous improvement initiatives and ensure our platform remains at the forefront of investment management technology.

Responsibilities
- Contribute to the security and operation of STOXX's GCP platform infrastructure.
- Ensure the platform's security, reliability, and efficiency meet regulatory, business, and client requirements.
- Work with the Principal Cloud Security Engineer to implement and enforce a cloud security posture.
- Work with the extended Information Security Office (ISO) to ensure cloud security standards are aligned with ISO standards.
- Collaborate with cross-functional teams to implement the cloud security roadmap.
- Drive continuous improvement initiatives to enhance pipeline performance and customer satisfaction.
- Keep abreast of emerging trends and technologies in cloud security and operations, and promote them across engineering and business functions.
- Conduct audits and system reviews to ensure compliance with the latest regulatory and security standards.
- Perform investigations during security incidents, identifying the root cause and taking action to prevent recurrence.

Requirements
- 3+ years' experience in cloud security on any of the major cloud providers.
- Experience with the development and deployment of large-scale, complex security platforms.
- Good knowledge of GCP products across database, serverless, containerization, and API.
- Experience working in a global or multinational team setting.
- Strong communication and collaboration skills.
- Proven ability to drive innovation and continuous improvement initiatives.
- Focus on simplicity, automation, and observability.
- Bachelor's or Master's degree in Computer Science or a related field.
- Some or all of: Wiz, SonarQube, Tenable, Palo Alto, Terraform, Python, GitHub Actions, Apigee, Airflow, and any SIEM tool.
- Ability to create scripts/tools as they relate to security (a small sketch follows this posting).
- Ability to troubleshoot, trace, and diagnose API endpoint and network security issues.
- Knowledge of security protocols and mechanisms.

#MIDSENIOR #STOXX

What You Can Expect From Us
At ISS STOXX, our people are our driving force. We are committed to building a culture that values diverse skills, perspectives, and experiences. We hire the best talent in our industry and empower them with the resources, support, and opportunities to grow, professionally and personally. Together, we foster an environment that fuels creativity, drives innovation, and shapes our future success. Let’s empower, collaborate, and inspire. Let’s be #BrilliantTogether.

About ISS STOXX
ISS STOXX GmbH is a leading provider of research and technology solutions for the financial market. Established in 1985, we offer top-notch benchmark and custom indices globally, helping clients identify investment opportunities and manage portfolio risks. Our services cover corporate governance, sustainability, cyber risk, and fund intelligence. Majority-owned by Deutsche Börse Group, ISS STOXX has over 3,400 professionals in 33 locations worldwide, serving around 6,400 clients, including institutional investors and companies focused on ESG, cyber, and governance risk. Clients trust our expertise to make informed decisions for their stakeholders' benefit.

STOXX® and DAX® indices comprise a global and comprehensive family of more than 17,000 strictly rules-based and transparent indices. Best known for the leading European equity indices EURO STOXX 50®, STOXX® Europe 600, and DAX®, the portfolio of index solutions consists of total market, benchmark, blue-chip, sustainability, thematic, and factor-based indices covering a complete set of world, regional, and country markets. STOXX and DAX indices are licensed to more than 550 companies around the world for benchmarking purposes and as underlyings for ETFs, futures and options, structured products, and passively managed investment funds. STOXX Ltd., part of the ISS STOXX group of companies, is the administrator of the STOXX and DAX indices under the European Benchmark Regulation.

Visit our website: https://www.issgovernance.com
View additional open roles: https://www.issgovernance.com/join-the-iss-team/

Institutional Shareholder Services (“ISS”) is committed to fostering, cultivating, and preserving a culture of diversity and inclusion. It is our policy to prohibit discrimination or harassment against any applicant or employee on the basis of race, color, ethnicity, creed, religion, sex, age, height, weight, citizenship status, national origin, social origin, sexual orientation, gender identity or gender expression, pregnancy status, marital status, familial status, mental or physical disability, veteran status, military service or status, genetic information, or any other characteristic protected by law (referred to as “protected status”). All activities including, but not limited to, recruiting and hiring, recruitment advertising, promotions, performance appraisals, training, job assignments, compensation, demotions, transfers, terminations (including layoffs), benefits, and other terms, conditions, and privileges of employment, are and will be administered on a non-discriminatory basis, consistent with all applicable federal, state, and local requirements.
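As a small example of the security scripting called out in the requirements, here is a standard-library Python sketch that reports how many days remain on a host's TLS certificate; the host is a placeholder, and real tooling would add retries, alert thresholds, and fleet-wide iteration.

```python
# Check days until a host's TLS certificate expires, stdlib only.
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port), timeout=10),
                         server_hostname=host) as sock:
        cert = sock.getpeercert()
    # notAfter looks like "Jun  1 12:00:00 2025 GMT"
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

print(days_until_cert_expiry("example.com"))  # placeholder host
```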
Posted 1 week ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a Senior Manager - DevOps Engineer to join our technology team at Clarivate. You will be responsible for providing strategic leadership for DevOps: shaping technical and operational strategies, overseeing project execution, collaborating with cross-functional teams, and mentoring team members for professional growth.

About You – Experience, Education, Skills, and Accomplishments
- 7+ years of leadership experience working with cross-functional teams (business and technology teams) in a dynamic environment.
- At least 10 years of professional experience, with a minimum of 6 years as a DevOps Engineer or in a similar role, with experience across various CI/CD and configuration management tools, e.g., Jenkins, Maven, Gradle, Azure DevOps, GitLab, TeamCity, AWS CodePipeline, Packer, CloudFormation, Terraform, or similar CI/CD orchestrator tools.
- Hands-on experience with Docker and Kubernetes, including building Dockerfiles and images, establishing Docker image repositories, and creating, managing, and orchestrating a Kubernetes-based infrastructure in the cloud or on-prem.
- Comfortable writing scripts/services that pull and manipulate data from heterogeneous data sources (a small sketch follows this posting).

It would be great if you also had:
- A strong understanding of data pipelines, ETL/ELT processes, and cloud data platforms (e.g., AWS, Azure, GCP).
- Familiarity with modern data tools (e.g., Airflow, dbt, Snowflake, Databricks, Kafka).
- Knowledge of cloud-native software architectures based on microservices, e.g., API management, autoscaling, service discovery, service mesh, service gateways.

What will you be doing in this role?
- Provide leadership and technical guidance to coach, motivate, and lead team members to their optimum performance levels and career development.
- Communicate technical information to non-technical stakeholders.
- Develop strong architecture and design using best practices, patterns, and business acumen.
- Drive analysis, design, and delivery of quality technical solutions and projects in line with product roadmaps, customer expectations, and internal priorities, including developing infrastructure-as-code and automated scripts for building or deploying workloads in various environments through CI/CD pipelines.
- Develop and support quarterly plans for the IP product segment.
- Collaborate with cross-functional teams to analyze, design, and develop software solutions.
- Stay up to date with emerging trends and technologies in DevOps and cloud computing.
- Participate in the testing and deployment of software solutions.
- Keep up with industry best practices, trends, and standards; identify automation opportunities; and design and develop automation solutions that improve operations, efficiency, security, and visibility.

About the Team
Cloud Architecture and DevOps Engineering is part of Product Engineering in the Clarivate IP business unit. This team is responsible for driving cloud-native initiatives, standardizing CI/CD, supporting improvements to DevOps engineering practices, and building future-proof cloud solutions.

Working Hours
This is a full-time opportunity with Clarivate: 9 hours per day including a lunch break. You should be flexible with working hours to align with globally distributed teams and stakeholders.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
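A minimal sketch of a script that pulls and joins data from heterogeneous sources, as the role describes: one REST source and one relational source. The endpoint URL and both schemas are hypothetical, so the request itself will only succeed against a real API.

```python
# Enrich rows from a (hypothetical) REST API with ownership data from SQL.
import sqlite3
import requests

# Source 1: REST API (placeholder endpoint, assumed to return a JSON list)
api_rows = requests.get("https://api.example.com/v1/services", timeout=10).json()

# Source 2: relational database (in-memory SQLite standing in for a real DB)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE owners (service TEXT, team TEXT)")
conn.executemany("INSERT INTO owners VALUES (?, ?)",
                 [("billing", "platform"), ("search", "core")])

owners = dict(conn.execute("SELECT service, team FROM owners"))
for row in api_rows:
    row["owner_team"] = owners.get(row.get("service"), "unassigned")
```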
Posted 1 week ago
10.0 - 13.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Organization
At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Data Engineer
Location: Bangalore

Business & Team
The Technology Team is responsible for the world-leading application of technology and operations across every aspect of CommBank, from innovative product platforms for our customers to essential tools within our business. We also use technology to drive efficient and timely processing, an essential component of great customer service. CommBank is recognised as leading the industry in IT and operations with its world-class platforms and processes, agile IT infrastructure, and innovation in everything from payments to internet banking and mobile apps.

The Group Security (GS) team protects the Bank and our customers from cyber compromise through proactive management of cyber security, privacy, and operational risk. Our team includes: Cyber Strategy & Performance, Cyber Security Centre, Cyber Protection & Design, Cyber Delivery, Cyber Data Engineering, Cyber Data Security, and Identity & Access Technology.

The Group Security Data Engineering team provides specialised data services and platforms for the CommBank group and is accountable for developing the Group’s data strategy, data policy and standards, and governance, and for setting requirements for data enablers/tools. The team is also accountable for facilitating a community of practitioners to share best practice and build data talent and capabilities.

Impact & Contribution
To ensure the Group achieves a sustainable competitive advantage through data engineering, you will play a key role in supporting and executing the Group's data strategy. We are looking for an experienced Data Engineer to join our Group Security Team, which is part of the wider Cyber Security Engineering practice. In this role, you will be responsible for setting up the Group Security Data Platform to ingest the organisation's security telemetry data, along with additional data assets and data products. This platform will provide security controls and services leveraged across the Group.

Roles & Responsibilities
You will be expected to perform the following tasks in a manner consistent with CBA's values and people capabilities.
- Hands-on technical experience working in AWS, with knowledge of services such as EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, KMS, Step Functions, SQS, SNS, and CloudWatch.
- A robust set of technical and soft skills: an excellent AWS data engineer with a focus on complex automation and engineering framework development. Being well-versed in Python is mandatory, and experience developing complex frameworks in Python is required.
- Passion for cloud, DevSecOps, and automation, with a keen interest in solving complex problems systematically.
- Drive the development and implementation of scalable data solutions and data pipelines using various AWS services.
- Ability to work independently and collaborate closely with team members and technology leads.
- A proactive approach, constantly seeking innovative solutions to complex technical challenges.
- Ability to take responsibility for nominated technical assets related to areas of expertise, including roadmaps and technical direction.
- Ability to own and develop technical strategy, overseeing medium to complex engineering initiatives.

Essential Skills
- About 10-13 years of experience as a data engineering professional in a data-intensive environment, with strong analytical and reasoning skills in the relevant area.
- Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, and CloudWatch.
- Excellent skills in Python-based framework development (mandatory).
- Proficiency in SQL for efficient querying, managing databases, handling complex queries, and optimizing query performance.
- Excellent automation skills, including: automating the testing framework using tools such as PyPy and pytest, with unit, integration, and functional tests and mockups; automating data pipelines and expediting tasks such as data ingestion and transformation; API-based automated and integrated calls (REST, cURL, authentication and authorization, tokens, pagination, OpenAPI, Swagger; a paginated-ingestion sketch follows this posting); implementing advanced engineering techniques and handling ad hoc requests to automate processes on demand; and implementing automated and secured file transfer protocols like XCOM, FTP, SFTP, and HTTP/S.
- Experience with Terraform, Jenkins, TeamCity, and Artifactory as part of DevOps is essential; Docker and Kubernetes are also considered.
- Proficiency in building orchestration workflows using Apache Airflow.
- Strong understanding of streaming data processing concepts, including event-driven architectures.
- Familiarity with CI/CD pipeline development (e.g., Jenkins).
- Extensive experience with and understanding of data modelling, SCD types, data warehousing, and ETL processes.
- Excellent experience with GitHub or any preferred version control system.
- Expertise in data pipeline development using various data formats/types.
- Mandatory knowledge and experience in big data processing using PySpark/Spark, and performance optimization of applications.
- Proficiency in handling various file formats (CSV, JSON, XML, Parquet, Avro, and ORC) and automating processes in a big data environment.
- Ability to use Linux/Unix environments for development and testing.
- Awareness of security best practices to protect data and infrastructure, including encryption, tokenization, masking, firewalls, and security zones.
- Well-structured documentation skills and the ability to create a well-defined knowledge base.
- Certifications such as AWS Certified Data Analytics / Engineer / Developer – Specialty, or AWS Certified Solutions Architect.
- Ability to design robust, efficient, and cost-effective data engineering pipelines that are highly available and dynamically scalable on demand, enabling systems to respond effectively to high demands and heavy loads while maintaining high throughput and I/O performance with no data loss.
- Own and lead the end-to-end data engineering life cycle, from requirement gathering through design, development, testing, delivery, and support, as part of the DevSecOps process.
- Demonstrated skills and mindset to implement encryption methodologies such as SSL/TLS, data encryption at rest and in transit, and other data security best practices.
- Hands-on experience with data design tools like Erwin, and demonstrated ability to build data models, data warehouses, data lakes, data assets, and data products.
- Ability to constructively challenge the status quo and lead the establishment of data governance and metadata management, ask the right questions, and design with the right principles.

Education Qualification
A Bachelor's or Master's degree in Engineering, specialising in Computer Science, Information Technology, or a relevant qualification.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 29/06/2025
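A minimal sketch of token-authenticated, paginated REST ingestion, as called out in the skills list; the endpoint, parameter names, and payload shape are assumptions.

```python
# Pull every page from a paginated REST endpoint using a bearer token.
import requests

def fetch_all(base_url: str, token: str) -> list[dict]:
    items: list[dict] = []
    page = 1
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        resp = requests.get(
            base_url,
            headers=headers,
            params={"page": page, "per_page": 100},  # hypothetical paging scheme
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()          # assumed to be a JSON list per page
        if not batch:                # an empty page signals the end
            return items
        items.extend(batch)
        page += 1

# rows = fetch_all("https://api.example.com/v1/findings", token="***")
```

Real APIs vary (cursor tokens, Link headers, rate limits), so the termination condition and paging parameters would be adapted per source.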
Posted 1 week ago
5.0 - 8.0 years
15 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Required Key Skills
Must have: GCP BigQuery, GCP Composer, GCP Dataproc, Airflow, SQL, Hive, HDFS architecture, Python, PySpark
Good to have: other GCP services, other clouds, NoSQL databases
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview
Working at Atlassian: Atlassians can choose where they work, whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Team: Core Engineering Reliability Team

Responsibilities
- Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting objectives of incident prevention and reducing detection, mitigation, and communication times.
- Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.
- Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.
- Serve as the data domain expert, mastering the details of our incident management infrastructure.
- Take full ownership of problems, from ambiguous requirements through rapid iterations.
- Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues.
- Cultivate strong relationships between teams that produce data and those that build insights.

Minimum Qualifications / Your Background
- BS in Computer Science or equivalent experience, with 8+ years as a Senior Data Engineer or in a similar role.
- 10+ years of progressive experience building scalable datasets and reliable data engineering practices.
- Proficiency in Python, SQL, and data platforms like Databricks.
- Proficiency in relational databases and query authoring (SQL).
- Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements.
- Experience building and scaling experimentation practices, statistical methods, and tools in a large-scale organization.
- Excellence in building scalable data pipelines using Spark (SparkSQL) with the Airflow scheduler/executor framework or similar scheduling tools.
- Expert experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka).
- Understanding of data engineering tools, frameworks, and standards that improve the productivity and quality of output for data engineers across the team.
- Well-versed in modern software development practices (Agile, TDD, CI/CD).

Desirable Qualifications
- Demonstrated ability to design and operate data infrastructure that delivers high reliability for our customers.
- Familiarity working with datasets like monitoring, observability, and performance data.
Posted 1 week ago
8.0 years
0 Lacs
India
On-site
It takes powerful technology to connect our brands and partners with an audience of hundreds of millions of people. Whether you’re looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process trillions of data points a day, what you do here will have a huge impact on our business—and the world. Job Location: Hyderabad (Hybrid Work Model) The Data and Common Services (DCS) team within the Yahoo Advertising Engineering organization is responsible for the Advertising core data infrastructure and services that provide common, horizontal services for user and contextual targeting, privacy and analytics. We are looking for a talented junior or mid level engineer who can design, implement, and support robust, scalable and high quality solutions related to Advertising Targeting, Identity, Location and Trust & Verification. As a member of the team, you will be helping our Ad platforms to deliver highly accurate and relevant Advertising experience for our consumers and for the web at large. Job Description Design and code backend Java applications and services. Emphasis is placed on implementing maintainable, scalable, systems capable of handling billions of requests per day. Analyze business and technical requirements and design solutions that meet those needs. Collaborate with project managers to develop and clarify requirements Work with Operations Engineers to ensure applications are operations ready and able to be effectively monitored using automated methods Troubleshoot production issues related to the team’s applications. Effectively manage day-to-day tasks to meet scheduled commitments. Be able to work independently. Collaborate with programmers both on their team and on other teams Skills and Education B.Tech/BE in Computer Science or equivalent technical discipline 8+ years of experience designing and programming in a Unix/Linux environment Excellent written and verbal communication skills, e.g., the ability to explain the work in plain language Experience delivering innovative, customer-centric products at high scale Technical with a track record of successful delivery as individual contributor Experience with building robust, scalable, distributed services Execution experience in fast-paced environments and performance driven culture Experience with big data technologies, such as Spark, Hadoop, and Airflow Knowledge of CI/CD and DevOps tools and processes Strong programming skills in Java, Python, or Scala Solid understanding of RDBMS and general database concepts Must have extensive technical knowledge and experience with distributed systems Must have strong programming, testing, and troubleshooting skills. Experience in public cloud such as AWS. Important notes for your attention Applications: All applicants must apply for Yahoo openings direct with Yahoo. We do not authorize any external agencies in India to handle candidates’ applications. No agency nor individual may charge candidates for any efforts they make on an applicant’s behalf in the hiring process. Our internal recruiters will reach out to you directly to discuss the next steps if we determine that the role is a good fit for you. Selected candidates will go through formal interviews and assessments arranged by Yahoo direct. Offer Distributions: Our electronic offer letter and documents will be issued through our system for e-signatures, not via individual emails. Yahoo is proud to be an equal opportunity workplace. 
All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability, or any other protected category. Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law.
Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability-related issues, such as following up on an application, will not receive a response.
Yahoo has a high degree of flexibility around employee location and hybrid working. In fact, our flexible-hybrid approach to work is one of the things our employees rave about. Most roles don't require specific regular patterns of in-person office attendance. If you join Yahoo, you may be asked to attend (or travel to attend) on-site work sessions, team-building, or other in-person events. When these occur, you'll be given notice to make arrangements. If you're curious about how this factors into this role, please discuss with the recruiter.
Currently work for Yahoo? Please apply on our internal career site.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior Data Engineer/Developer
Number of Positions: 2
Job Description
The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
Responsibilities
Design, construct, install, test, and maintain highly scalable data management systems and data pipelines.
Ensure systems meet business requirements and industry practices.
Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
Research opportunities for data acquisition and new uses for existing data.
Develop data set processes for data modeling, mining, and production.
Integrate new data management technologies and software engineering tools into existing structures.
Create custom software components and analytics applications.
Implement and update disaster recovery procedures.
Collaborate with data architects, modelers, and IT team members on project goals.
Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects.
Qualifications
Bachelor's degree in computer science, engineering, or a related field, or equivalent work experience.
5-8 years of proven experience as a Senior Data Engineer or in a similar role.
Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc.
Expert-level SQL skills for data manipulation (DML) and validation (DB2).
Experience with data pipeline and workflow management tools.
Experience with object-oriented/functional scripting languages: Python, Java, Go, etc.
Strong problem-solving and analytical skills.
Excellent verbal communication skills.
Good interpersonal skills.
Ability to provide technical leadership for the team.
Posted 1 week ago
9.0 - 10.0 years
12 - 14 Lacs
Hyderabad
Work from Office
Responsibilities:
Design, develop, and maintain data pipelines using Airflow/Data Flow/Data Lake
Optimize the performance and scalability of ETL processes with SQL and Python
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Sanas is revolutionizing the way we communicate with the world's first real-time algorithm designed to modulate accents, eliminate background noise, and magnify speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard. Sanas is a 200-strong team, established in 2020. In this short span, we've successfully secured over $100 million in funding. Our innovations have been supported by the industry's leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and other influential investors. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you're not just adopting a product; you're investing in the future of communication.
We're looking for a sharp, hands-on Data Engineer to help us build and scale the data infrastructure that powers cutting-edge audio and speech AI products. You'll be responsible for designing robust pipelines, managing high-volume audio data, and enabling machine learning teams to access the right data fast. As one of the first dedicated data engineers on the team, you'll play a foundational role in shaping how we handle data end-to-end, from ingestion to training-ready features. You'll work closely with ML engineers, research scientists, and product teams to ensure data is clean, accessible, and structured for experimentation and production.
Key Responsibilities:
Build scalable, fault-tolerant pipelines for ingesting, processing, and transforming large volumes of audio and metadata.
Design and maintain ETL workflows for training and evaluating ML models, using tools like Airflow or custom pipelines.
Collaborate with ML research scientists to make raw and derived audio features (e.g., spectrograms, MFCCs) efficiently available for training and inference.
Manage and organize datasets, including labeling workflows, versioning, annotation pipelines, and compliance with privacy policies.
Implement data quality, observability, and validation checks across critical data pipelines.
Help optimize data storage and compute strategies for large-scale training.
Qualifications:
2–5 years of experience as a Data Engineer, Software Engineer, or in a similar role with a focus on data infrastructure.
Proficient in Python and SQL, with experience working with distributed data processing tools (e.g., Spark, Dask, Beam).
Experience with cloud data infrastructure (AWS/GCP), object storage (e.g., S3), and data orchestration tools.
Familiarity with audio data and its unique challenges (large file sizes, time-series features, metadata handling) is a strong plus.
Comfortable working in a fast-paced, iterative startup environment where systems are constantly evolving.
Strong communication skills and a collaborative mindset: you'll be working cross-functionally with ML, infra, and product teams.
Nice to Have:
Experience with data for speech models like ASR, TTS, or speaker verification.
Knowledge of real-time data processing (e.g., Kafka, WebSockets, or low-latency APIs).
Background in MLOps, feature engineering, or supporting model lifecycle workflows.
Experience with labeling tools, audio annotation platforms, or human-in-the-loop systems.
Joining us means contributing to the world's first real-time speech understanding platform, revolutionizing contact centers and enterprises alike. Our technology empowers agents, transforms customer experiences, and drives measurable growth.
But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.
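Since the Sanas role involves making derived audio features such as spectrograms and MFCCs available for training, here is a minimal sketch of what that extraction step can look like, assuming librosa is installed; "speech.wav" is a hypothetical file path, not from the posting:

```python
# A minimal MFCC/mel-spectrogram extraction sketch, assuming librosa is
# installed; "speech.wav" is a hypothetical local audio file.
import librosa

# Load audio at a fixed sample rate so features are comparable across files.
y, sr = librosa.load("speech.wav", sr=16000)

# 13 MFCCs per frame, a common compact representation for speech models.
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Log-scaled mel spectrogram, often used as input to neural acoustic models.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)
log_mel = librosa.power_to_db(mel)

print(mfccs.shape, log_mel.shape)  # (13, frames), (80, frames)
```

In a production pipeline these arrays would typically be computed once at ingestion and stored (e.g., in object storage) so training jobs do not re-decode raw audio.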
Posted 1 week ago
8.0 - 13.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Role: Technical Project Manager
Location: Gachibowli, Hyderabad
Duration: Full time
Timings: 5:30pm - 2:00am IST
Note: Looking for immediate joiners only (15-30 days' notice).
Job Summary:
We are seeking a Technical Project Manager with a strong data engineering background to lead and manage end-to-end delivery of data platform initiatives. The ideal candidate will have hands-on exposure to AWS, ETL pipelines, Snowflake, and DBT, and must be adept at stakeholder communication, agile methodologies, and cross-functional coordination across engineering, data, and business teams.
Key Responsibilities:
Plan, execute, and deliver data engineering and cloud-based projects within scope, budget, and timeline.
Work closely with data architects, engineers, and analysts to manage deliverables involving ETL pipelines, the Snowflake data warehouse, and DBT models.
Lead Agile/Scrum ceremonies: sprint planning, backlog grooming, stand-ups, and retrospectives.
Monitor and report project status, risks, and issues to stakeholders and leadership.
Coordinate cross-functional teams across data, cloud infrastructure, and product.
Ensure adherence to data governance, security, and compliance standards throughout the lifecycle.
Manage third-party vendors or consultants as required for data platform implementations.
Own project documentation, including project charters, timelines, RACI matrices, risk registers, and post-implementation reviews.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (Master's preferred).
8+ years in IT, with 3-5 years as a Project Manager in data-focused environments.
Hands-on understanding of AWS services (e.g., S3, Glue, Lambda, Redshift), ETL/ELT frameworks and orchestration, Snowflake Data Warehouse, and DBT (Data Build Tool) for data modeling.
Familiarity with SQL, data pipelines, and data quality frameworks.
Experience using project management tools like JIRA, Confluence, MS Project, and Smartsheet.
PMP, CSM, or SAFe certifications preferred.
Excellent communication, presentation, and stakeholder management skills.
Posted 1 week ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Role: Senior Data Engineer
Location: Indore
Job Description:
Build and maintain data pipelines for ingesting and processing structured and unstructured data.
Ensure data accuracy and quality through validation checks and sanity reports.
Improve data infrastructure by automating manual processes and scaling systems.
Support internal teams (Product, Delivery, Onboarding) with data issues and solutions.
Analyze data trends and provide insights to inform key business decisions.
Collaborate with program managers to resolve data issues and maintain clear documentation.
Must-Have Skills:
Proficiency in SQL, Python (Pandas, NumPy), and R
Experience with ETL tools (e.g., Apache NiFi, Talend, AWS Glue)
Cloud experience with AWS (S3, Redshift, EMR, Athena, RDS)
Strong understanding of data modeling, warehousing, and data validation
Familiarity with data visualization tools (Tableau, Power BI, Looker)
Experience with Apache Airflow, Kubernetes, Terraform, Docker
Knowledge of data lake architectures, APIs, and custom data formats (JSON, XML, YAML)
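This posting emphasizes validation checks and sanity reports. Here is a minimal pandas sketch of that kind of check; the toy schema, column names, and rules are illustrative assumptions, not from the posting:

```python
# A minimal data-validation sketch with pandas; the columns and rules
# below are made-up placeholders standing in for a real pipeline schema.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [100.0, -5.0, 250.0, None],
})

issues = []
if df["order_id"].duplicated().any():
    issues.append("duplicate order_id values")
if df["amount"].isna().any():
    issues.append("missing amount values")
if (df["amount"].dropna() < 0).any():
    issues.append("negative amount values")

# A sanity report can be as simple as logging the findings for review.
print("validation issues:", issues or "none")
```

In practice such checks usually run as a pipeline stage (e.g., an Airflow task) that fails or alerts when issues are found, rather than silently passing bad data downstream.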
Posted 1 week ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
What makes Techjays an inspiring place to work:
At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that's pushing the boundaries of digital transformation. At Techjays, you'll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI.
We are looking for a mid-level AI Implementation Engineer with a strong Python background to join our AI initiatives team. In this role, you will help design and develop production-ready systems that combine information retrieval, vector databases, and large language models (LLMs) into scalable Retrieval-Augmented Generation (RAG) pipelines. You'll work closely with AI researchers, backend engineers, and data teams to bring generative AI use cases to life across multiple domains.
Key Responsibilities:
Develop and maintain scalable Python services that implement AI-powered retrieval and generation workflows.
Build and optimize vector-based retrieval pipelines using tools like FAISS, Pinecone, or Weaviate.
Integrate LLMs via APIs (e.g., OpenAI, Hugging Face) using orchestration frameworks such as LangChain, LlamaIndex, etc.
Collaborate on system architecture, API design, and data flow to support RAG systems.
Monitor, test, and improve the performance and accuracy of AI features in production.
Work in cross-functional teams with product, data, and ML stakeholders to deploy AI solutions quickly and responsibly.
Requirements:
3–5 years of hands-on experience in Python development, with a focus on backend or data-intensive systems.
Experience with information retrieval concepts and tools (e.g., Elasticsearch, vector search engines).
Familiarity with LLM integration or orchestration tools (LangChain, LlamaIndex, etc.).
Working knowledge of RESTful API development, microservices, and containerization (Docker).
Solid software engineering practices, including Git, testing, and CI/CD pipelines.
Nice to have:
Exposure to prompt engineering or fine-tuning LLMs.
Experience deploying cloud-based AI applications (AWS, GCP, or Azure).
Familiarity with document ingestion pipelines and unstructured data processing.
Understanding of MLOps tools and practices (e.g., MLflow, Airflow).
What we offer:
Best-in-class packages.
Paid holidays and flexible time-off policies.
Casual dress code and a flexible working environment.
Opportunities for professional development in an engaging, fast-paced environment.
Medical insurance covering self and family up to 4 lakhs per person.
Diverse and multicultural work environment.
Be part of an innovation-driven culture with ample support and resources to succeed.
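Since this role centers on vector-based retrieval for RAG pipelines with tools like FAISS, here is a minimal, hedged sketch of the retrieval step, assuming faiss-cpu and numpy are installed; the random vectors are placeholders standing in for real embedding-model output:

```python
# A minimal vector-retrieval sketch for a RAG pipeline, assuming faiss-cpu
# and numpy; random vectors stand in for a real embedding model's output.
import numpy as np
import faiss

dim = 384                                   # assumed embedding dimensionality
docs = ["policy doc", "product FAQ", "release notes"]

rng = np.random.default_rng(0)
doc_vecs = rng.random((len(docs), dim), dtype="float32")

index = faiss.IndexFlatL2(dim)              # exact L2 search over doc vectors
index.add(doc_vecs)

query_vec = rng.random((1, dim), dtype="float32")
distances, ids = index.search(query_vec, 2) # retrieve the top-2 documents
for rank, i in enumerate(ids[0]):
    print(rank, docs[i], float(distances[0][rank]))
```

In a full RAG system, the retrieved documents would then be passed as context to an LLM call, typically via an orchestration layer such as LangChain or LlamaIndex.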
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
Job Description:
The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India's epicenter of innovation, has been selected to be home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.
At Takeda's ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and the abilities they bring to our company. We are continuously improving our collaborators' journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
About The Role
We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda's technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, MLOps workflows, and building systems aligned with healthcare standards. By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda's global mission of delivering transformative therapies to patients worldwide.
How You Will Contribute
Architect scalable and secure machine learning systems that integrate with Takeda's enterprise platforms, including R&D, manufacturing, and clinical trial operations.
Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks.
Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility.
Collaborate with Takeda's Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment.
Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana.
Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda's ethical AI standards in system deployment.
Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals.
Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability.
Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture.
Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda.
Skills And Qualifications
Education
Bachelor's, Master's, or Ph.D. in Computer Science, Software Engineering, Data Science, or a related field.
Experience
At least 6-8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale.
Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.
Technical Skills
Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML.
Strong programming skills in Python and experience with machine learning libraries such as Scikit-learn, XGBoost, LightGBM, and TensorFlow.
Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment.
Familiarity with Databricks tools and services for scalable data workflows and model management.
Expertise in building robust observability and monitoring systems to track ML systems in production.
Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods.
Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments.
Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR).
Soft Skills
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills for influencing technical and non-technical stakeholders.
Leadership ability to mentor teams and drive architecture-standardization initiatives.
Ability to manage projects independently and advocate for AI/ML adoption across Takeda.
Preferred Qualifications
Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization.
Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare.
Publications or contributions to AI research or MLOps tooling communities.
WHAT TAKEDA ICC INDIA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.
BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
Competitive salary + performance annual bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Health & wellness programs, including annual health screening and weekly health sessions for employees
Employee Assistance Program
5 days of leave every year for voluntary service, in addition to humanitarian leaves
Broad variety of learning platforms
Diversity, Equity, and Inclusion programs
No Meeting Days
Reimbursements – home internet & mobile phone
Employee Referral Program
Leaves – paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)
ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution.
We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.
Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
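Given this role's emphasis on MLOps tooling such as MLflow for deployment and reproducibility, here is a minimal, hedged sketch of experiment tracking and model logging, assuming mlflow and scikit-learn are installed; the experiment name, model, and metric are illustrative placeholders:

```python
# A minimal MLflow tracking sketch; the experiment name, toy dataset, and
# model below are illustrative assumptions, not Takeda's actual workflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")      # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)     # record the hyperparameter
    mlflow.log_metric("accuracy", acc)        # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")  # version the trained model
```

Logging parameters, metrics, and the model artifact together is what makes a run reproducible: a later retraining or monitoring job can compare against exactly this version.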
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Your Skills & Experience:
Strong expertise in Data Engineering is highly recommended.
• 4+ years of relevant experience in Big Data technologies.
• Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
• Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.
• Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
• Well-versed in, and working knowledge of, data platform related services on Azure/GCP.
• Bachelor's degree and 4+ years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.
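Since this posting centers on Spark-based, end-to-end batch pipelines, here is a minimal, hedged PySpark sketch, assuming pyspark is installed; the inline records and column names are made-up placeholders:

```python
# A minimal PySpark batch-aggregation sketch; the records and columns are
# illustrative placeholders for one stage of an end-to-end pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

rows = [("web", 120), ("mobile", 80), ("web", 60)]
df = spark.createDataFrame(rows, ["channel", "events"])

# Aggregate events per channel, the shape of a typical transform stage.
summary = df.groupBy("channel").agg(F.sum("events").alias("total_events"))
summary.show()

spark.stop()
```

In a real pipeline, the inline rows would be replaced by reads from HDFS or cloud storage, and the job would typically be scheduled by an orchestrator such as Oozie or Airflow.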
The Airflow job market in India is rapidly growing as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in Airflow can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.
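Since the overview above describes Airflow as an orchestrator for computational workflows, here is a minimal, hedged sketch of what an Airflow pipeline looks like in code, assuming a recent Airflow 2.x installation (2.4+ for the `schedule` argument); the DAG id and task bodies are illustrative placeholders:

```python
# A minimal Airflow DAG sketch: two Python tasks chained into a daily
# pipeline. The DAG id and task logic are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw records from a source system")

def transform():
    print("clean and reshape the records")

with DAG(
    dag_id="example_etl",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task     # extract runs before transform
```

The `>>` operator declares the dependency between tasks; the Airflow scheduler then handles retries, backfills, and monitoring, which is the core of the orchestration work these roles describe.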
The average salary range for Airflow professionals in India varies by experience level:
Entry-level: INR 6-8 lakhs per annum
Mid-level: INR 10-15 lakhs per annum
Experienced: INR 18-25 lakhs per annum
In the field of Airflow, a typical career path may progress as follows:
Junior Airflow Developer
Airflow Developer
Senior Airflow Developer
Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
Python programming
ETL concepts
Database management (SQL)
Cloud platforms (AWS, GCP)
Data warehousing
As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!