0.0 - 2.0 years
0 Lacs
India
On-site
We’re Hiring: Data Engineer

About The Job
Duration: 12 Months
Location: PAN India
Timings: Full Time (as per company timings)
Notice Period: Within 15 days or immediate joiner
Experience: 0-2 years

Responsibilities
Design, develop, and maintain reliable automated data solutions based on the identification, collection, and evaluation of business requirements, including but not limited to data models, database objects, stored procedures, and views.
Develop new and enhance existing data processing components (data ingest, data transformation, data store, data management, data quality).
Support and troubleshoot the data environment (including periodic on-call duty).
Document technical artifacts for developed solutions.
Good interpersonal skills; comfort and competence in dealing with different teams within the organization, with the ability to interface with multiple constituent groups and build sustainable relationships.
Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions.
Ability to master new skills.
Proactive approach to problem solving with effective influencing skills.
Familiarity with Agile practices and methodologies.

Education And Experience Requirements
Four-year degree in Information Systems, Finance/Mathematics, Computer Science, or similar.
0-2 years of experience in Data Engineering.

Required Knowledge, Skills, or Abilities
Advanced SQL queries, scripts, stored procedures, materialized views, and views.
Focus on ELT: load data into the database and perform transformations in the database (a brief illustrative sketch follows this posting).
Ability to use analytical SQL functions.
Snowflake experience a plus.
Cloud data warehouse experience (Snowflake, Azure DW, or Redshift); data modelling, analysis, programming.
Experience with DevOps models utilizing a CI/CD tool.
Hands-on work in the Azure cloud environment (ADLS, Blob).
Talend, Apache Airflow, Azure Data Factory, and BI tools like Tableau preferred.
Analyse data models.

We are looking for a Senior Data Engineer for the Enterprise Data Organization to build and manage data pipelines (data ingest, data transformation, data distribution, quality rules, data storage, etc.) for an Azure cloud-based data platform. The candidate must possess strong technical, analytical, programming, and critical thinking skills.
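For a concrete flavor of the ELT focus and analytical SQL functions listed above, here is a minimal sketch, assuming the snowflake-connector-python package; the account, stage, and table names are placeholders rather than anything from this posting.

```python
# Minimal ELT sketch: bulk-load raw rows first, then transform inside the
# warehouse with an analytical (window) SQL function. All identifiers below
# (account, stage, tables) are placeholders.
import snowflake.connector  # assumes snowflake-connector-python is installed

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="COMPUTE_WH", database="ANALYTICS", schema="RAW",
)

LOAD_SQL = """
COPY INTO raw.orders
FROM @raw_stage/orders/
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
"""

# The "T" of ELT happens in-database: deduplicate with ROW_NUMBER().
TRANSFORM_SQL = """
CREATE OR REPLACE TABLE curated.orders_latest AS
SELECT * EXCLUDE (rn)
FROM (
    SELECT o.*,
           ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) AS rn
    FROM raw.orders o
)
WHERE rn = 1
"""

with conn.cursor() as cur:
    cur.execute(LOAD_SQL)
    cur.execute(TRANSFORM_SQL)
conn.close()
```

The load step stays cheap and the heavy lifting runs on the warehouse itself, which is the point of ELT over traditional ETL.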
Posted 6 days ago
1.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited

Job Area: Engineering Group, Engineering Group > Mechanical Engineering

General Summary:
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Mechanical Engineer, you will design, analyze, troubleshoot, and test electro-mechanical systems and packaging. Qualcomm Engineers collaborate across functions to provide design information and complete project deliverables.

Minimum Qualifications:
Bachelor's degree in Mechanical Engineering or related field.

Job Overview:
The successful candidate will operate as a member of the Corporate Engineering Hyderabad department. Responsibilities include working with US and India teams to perform thermal and structural analysis of high-performance electronic assemblies. Specific tasks include daily use of thermal and structural analysis software, taking concept layouts from the design team, creating representative analytical models, defining boundary and loading conditions, running simulations, analyzing results, and making recommendations for optimization of designs. The Engineer will interface with internal staff and outside partners in the fast-paced execution of a variety of multi-disciplined projects.

Minimum Qualifications:
Bachelor's/Master's degree in Mechanical/Thermal/Electronic Engineering or a related field.
1-3 years actively involved in thermal and structural engineering analysis of high-density electronics packaging.
Strong background in heat transfer fundamentals with a good understanding of electronics cooling technologies (passive and active).
Knowledge of packaging technologies, electromechanical design, and thermal management materials.
Analysis tools experience utilizing Flotherm, XT, Icepak, 6SigmaET, Celsius EC, Ansys, Abaqus, or equivalent.
Solid modeling experience utilizing Pro/E or SolidWorks mechanical CAD systems.
Proven ability to work independently and collaboratively within a cross-functional team environment.
Strong technical documentation skills and excellent written and verbal communication.

Preferred Qualifications:
Expected to possess a strong understanding of mechanical engineering and analysis fundamentals.
Experience creating thermal and structural models of electronic circuit components, boards, and enclosures.
Experience applying environmental spec conditions to analytical model boundary and loading conditions.
Experience working with HW teams on component, board, and system thermal power estimates.
Experience specifying appropriate fans and heat sinks for electronic assemblies.
Experience working with design teams on optimization based on analysis results.
Demonstrated success in working with HW teams on appropriate thermal mitigation techniques.
Proficiency with thermal testing (e.g., LabVIEW, thermocouples, airflow measurements, thermal chambers, JTAG) for computer hardware.
Understands project goals and individual contribution toward those goals.
Effectively communicates with project peers and engineering personnel via e-mail, web meetings, and instant messaging, including status reports and illustrative presentation slides.
Excellent verbal and written communication skills.
Interact and collaborate with other internal mechanical and electronics engineers for optimal product development processes and schedule execution.
Effectively multitasks and meets aggressive schedules in a dynamic environment.
Prepare and deliver design reviews to the project team.

Education Requirements:
Bachelor's/Master's degree in Mechanical/Thermal/Electronic Engineering or a related field.

Keywords: Thermal analysis, electronics cooling, Flotherm, XT, Icepak, Celsius EC, thermal testing, thermal engineering, mechanical engineering.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3075823
Posted 6 days ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics, and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India, and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking a skilled and collaborative Sr. Data/Python Engineer with experience developing production Python-based applications (such as Django, Flask, or FastAPI on AWS) to support our data platform initiatives and application development. This role will initially focus on building and optimizing Streamlit application development frameworks and CI/CD pipelines, ensuring code reliability through automated testing with Pytest, and enabling team members to deliver updates via CI/CD pipelines. Once the deployment framework is implemented, the Sr. Engineer will own and drive data transformation pipelines in dbt and implement a data quality framework.

Key Responsibilities
Lead application testing and productionalization of applications built on top of Snowflake. This includes implementation and execution of unit and integration testing; automated test suites use Pytest and Streamlit App Tests to ensure code quality, data accuracy, and system reliability (see the sketch after this posting).
Develop and integrate CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI) for consistent deployments across dev, staging, and production environments.
Develop and test AWS-based pipelines: AWS Glue, Airflow (MWAA), S3.
Design, develop, and optimize data models and transformation pipelines in Snowflake using SQL and Python.
Build Streamlit-based applications to enable internal stakeholders to explore and interact with data and models.
Collaborate with team members and application developers to align requirements and ensure secure, scalable solutions.
Monitor data pipelines and application performance, optimizing for speed, cost, and user experience.
Create end-user technical documentation and contribute to knowledge sharing across engineering and analytics teams.
Work CST hours and collaborate with onshore and offshore teams.

Qualifications, Skills & Experience
5+ years of experience in Data Engineering or Python-based application development on AWS (Flask, Django, FastAPI, Streamlit). Experience building data-intensive applications in Python as well as data pipelines on AWS is a must.
Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field (or equivalent experience).
Proficient in SQL and Python for data manipulation and automation tasks.
Experience developing and productionalizing applications built on Python-based frameworks such as FastAPI, Django, and Flask.
Experience with application frameworks such as Streamlit, Angular, or React for rapid data app deployment.
Solid understanding of software testing principles and experience using Pytest or similar Python frameworks.
Experience configuring and maintaining CI/CD pipelines for automated testing and deployment.
Familiarity with version control systems such as GitLab.
Knowledge of data governance, security best practices, and role-based access control (RBAC) in Snowflake.

Preferred Qualifications
Experience with dbt (data build tool) for transformation modeling.
Knowledge of Snowflake's advanced features (e.g., masking policies, external functions, Snowpark).
Exposure to cloud platforms (e.g., AWS, Azure, GCP).
Strong communication and documentation skills.

Benefits
Health insurance
Paid leave
Technical training and certifications
Robust learning and development opportunities
Incentives
Toastmasters
Food program
Fitness program
Referral bonus program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that's shaping the future!

Hakkoda has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
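As a rough illustration of the Pytest plus Streamlit App Tests approach referenced in the responsibilities, here is a minimal sketch; the app file (app.py), its widgets, and the assertions are hypothetical, not Hakkoda code.

```python
# test_app.py -- minimal Pytest + Streamlit AppTest sketch.
# Assumes a hypothetical Streamlit app at app.py with one number_input widget
# and a markdown element that reports a row count.
from streamlit.testing.v1 import AppTest


def test_app_starts_cleanly():
    at = AppTest.from_file("app.py").run()
    assert not at.exception  # no uncaught exceptions during the first render


def test_threshold_filter_updates_output():
    at = AppTest.from_file("app.py").run()
    at.number_input[0].set_value(100).run()  # simulate a user changing the filter
    assert "rows" in at.markdown[0].value    # the summary text reflects the new value
```

Suites like this run headlessly, so they slot into a GitHub Actions, Azure DevOps, or GitLab CI pipeline alongside the usual Pytest unit tests.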
Posted 6 days ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a GCP Data Engineer at Kyndryl, you will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment using GCP data services. You will collaborate with global architects and business teams to design and deploy innovative solutions, supporting data analytics, automation, and transformation needs.

Responsibilities:
Design, develop, and maintain scalable data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage (an illustrative sketch follows this posting).
Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows.
Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
Develop and maintain Python/PySpark code for data processing and integrate it with GCP services for seamless data operations.
Develop and optimize SQL queries for data analysis and reporting.
Monitor and troubleshoot data pipeline issues to ensure timely resolution.
Implement data governance and security best practices within GCP.
Perform data quality checks and validation to ensure accuracy and consistency.
Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.
Provide technical support and guidance to junior data engineers and other team members.
Participate in code reviews and contribute to continuous improvement of data engineering practices.
Implement best practices for cost management and resource utilization within GCP.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience:
Bachelor's or master's degree in computer science, Engineering, or a related field, with over 8 years of experience in data engineering.
More than 3 years of experience with the GCP data ecosystem.
Hands-on experience and strong proficiency in GCP components such as Dataflow, Dataproc, BigQuery, Cloud Functions, Composer, and Data Fusion.
Excellent command of SQL with the ability to write complex queries and perform advanced data transformation.
Strong programming skills in PySpark and/or Python, specifically for building cloud-native data pipelines.
Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
Knowledge of data governance, security, and compliance best practices.
Experience with private and public cloud architectures, their pros/cons, and migration considerations.
Excellent problem-solving, analytical, and critical thinking skills.
Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail.
Communication skills: must be able to communicate with both technical and nontechnical audiences, and able to derive technical requirements with stakeholders.
Ability to work independently and in agile teams.

Preferred Technical and Professional Experience
GCP Data Engineer Certification is highly preferred.
Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
Experience working as a Data Engineer and/or in cloud modernization.
Knowledge of Databricks and Snowflake for data analytics.
Experience with NoSQL databases.
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
Familiarity with BI dashboards and Google Data Studio is a plus.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
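For a concrete flavor of the load-and-transform work on GCP described above, here is a minimal sketch using the google-cloud-bigquery client; the project, bucket, and dataset names are placeholders, not Kyndryl or client systems.

```python
# Minimal batch pipeline step: load CSVs from Cloud Storage into a staging
# table, then materialize a curated aggregate with SQL. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

load_job = client.load_table_from_uri(
    "gs://my-bucket/events/*.csv",
    "my-gcp-project.staging.events_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # block until the load finishes

query_job = client.query(
    """
    CREATE OR REPLACE TABLE analytics.daily_events AS
    SELECT DATE(event_ts) AS event_date, event_type, COUNT(*) AS events
    FROM staging.events_raw
    GROUP BY event_date, event_type
    """
)
query_job.result()
```

In a production setting the same two steps would typically be wrapped as tasks in a Cloud Composer (Airflow) DAG rather than run as a standalone script.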
Posted 6 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Exciting Opportunity at Eloelo: Join the Future of Live Streaming and Social Gaming!
Are you ready to be a part of the dynamic world of live streaming and social gaming? Look no further! Eloelo, an innovative Indian platform founded in February 2020 by ex-Flipkart executives Akshay Dubey and Saurabh Pandey, is on the lookout for passionate individuals to join our growing team in Bangalore.

About Us:
Eloelo stands at the forefront of multi-host video and audio rooms, offering a unique blend of interactive experiences, including chat rooms, PK challenges, audio rooms, and captivating live games like Lucky 7, Tambola, Tol Mol Ke Bol, and Chidiya Udd. Our platform has successfully attracted audiences from all corners of India, providing a space for social connections and immersive gaming.

Recent Milestone:
In pursuit of excellence, Eloelo reached a significant milestone by raising $22Mn in October 2023 from a diverse group of investors, including Lumikai, Waterbridge Capital, Courtside Ventures, Griffin Gaming Partners, and other esteemed new and existing contributors.

Why Eloelo?
Be a part of a team that thrives on creativity and innovation in the live streaming and social gaming space.
Rub shoulders with the stars! Eloelo regularly hosts celebrities such as Akash Chopra, Kartik Aryan, Rahul Dua, Urfi Javed, and Kiku Sharda from the Kapil Sharma Show.
Work with a world-class, high-performance team that constantly pushes boundaries and redefines what is possible.
Fun and work in the same place, with an amazing work culture, flexible timings, and a vibrant atmosphere.

We are looking to hire a business analyst to join our growth analytics team. This role sits at the intersection of business strategy, marketing performance, creative experimentation, and customer lifecycle management, with a growing focus on AI-led insights. You'll drive actionable insights to guide our performance marketing, creative strategy, and lifecycle interventions, while also building scalable analytics foundations for a fast-moving growth team.

About the Role:
We are looking for a highly skilled and creative Data Scientist to join our growing team and help drive data-informed decisions across our entertainment platforms. You will leverage advanced analytics, machine learning, and predictive modeling to unlock insights about our audience, content performance, and product engagement, ultimately shaping the way millions of people experience entertainment.

Key Responsibilities:
Develop and deploy machine learning models to solve key business problems (e.g., personalization, recommendation systems, churn prediction); an illustrative sketch follows this posting.
Analyze large, complex datasets to uncover trends in content consumption, viewer preferences, and engagement behaviors.
Partner with product, marketing, engineering, and content teams to translate data insights into actionable strategies.
Design and execute A/B and multivariate experiments to evaluate the impact of new features and campaigns.
Build dashboards and visualizations to monitor key metrics and provide stakeholders with self-service analytics tools.
Collaborate on the development of audience segmentation, lifetime value modeling, and predictive analytics.
Stay current with emerging technologies and industry trends in data science and entertainment.

Qualifications:
Master's or PhD in Computer Science, Statistics, Mathematics, Data Science, or a related field.
1+ years of experience as a Data Scientist, ideally within media, streaming, gaming, or entertainment tech.
Proficiency in programming languages such as Python or R.
Strong SQL skills and experience working with large-scale datasets and data warehousing tools (e.g., Snowflake, BigQuery, Redshift).
Experience with machine learning libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
Solid understanding of experimental design and statistical analysis techniques.
Ability to clearly communicate complex technical findings to non-technical stakeholders.

Preferred Qualifications:
Experience building recommendation engines, content-ranking algorithms, or personalization models in an entertainment context.
Familiarity with user analytics tools such as Mixpanel, Amplitude, or Google Analytics.
Prior experience with data pipeline and workflow tools (e.g., Airflow, dbt).
Background in natural language processing (NLP), computer vision, or audio analysis is a plus.

Why Join Us:
Shape the future of how audiences engage with entertainment through data-driven storytelling.
Work with cutting-edge technology on high-impact, high-visibility projects.
Join a collaborative team in a dynamic and fast-paced environment where creativity meets data science.
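As a small illustration of the churn-prediction responsibility above, here is a sketch using scikit-learn on a hypothetical engagement extract; the CSV path and column names are invented for the example.

```python
# Toy churn model: predict 30-day churn from recent engagement features.
# The file and columns are placeholders, not Eloelo's actual schema.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("user_engagement.csv")
features = ["sessions_7d", "minutes_watched_7d", "games_played_7d", "days_since_signup"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned_30d"], test_size=0.2,
    stratify=df["churned_30d"], random_state=42,
)

model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC-AUC: {auc:.3f}")
```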
Posted 6 days ago
15.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job description
Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary:
We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You'll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities:
Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs (an illustrative sketch follows this posting).
Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
Mentor junior team members and review code, models, and system designs for robustness and maintainability.
Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience:
Strong Python programming skills with 5+ years of applied ML/AI experience.
Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
Proficiency in REST API design using FastAPI and securing APIs in production environments.
Deep understanding of MySQL (query performance, schema design, transactions).
Hands-on experience with vector databases and embeddings for search, retrieval, and recommendation systems.
Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience:
Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
Experience with Docker, Kubernetes, and container orchestration.
Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer:
Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
High autonomy and influence in architecting real-world AI solutions.
A dynamic and collaborative environment focused on continuous learning and innovation.
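To make the FastAPI responsibility above concrete, here is a minimal model-serving sketch; the model artifact (model.joblib) and the feature payload are placeholders rather than anything specified in the posting.

```python
# main.py -- minimal FastAPI service exposing a pre-trained model.
# Run with: uvicorn main:app --reload  (model.joblib is a placeholder artifact)
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ML inference service")
model = joblib.load("model.joblib")  # loaded once at startup


class Features(BaseModel):
    values: list[float]  # flat feature vector for a single prediction


@app.post("/predict")
def predict(payload: Features) -> dict:
    prediction = model.predict([payload.values])[0]
    return {"prediction": float(prediction)}
```

A production version would add authentication, input validation against a feature schema, and request and latency monitoring, as the responsibilities above imply.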
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Built systems that power B2B SaaS products? Want to scale them for real-world impact?

Our client is solving some of the toughest data problems in India, powering fintech intelligence, risk engines, and decision-making platforms where structured data is often missing. Their systems are used by leading institutions to make sense of complex, high-velocity datasets in real time. We're looking for a Senior Data Engineer who has helped scale B2B SaaS platforms, built pipelines from scratch, and wants to take complete ownership of data architecture and infrastructure decisions.

What You'll Do:
Design, build, and maintain scalable ETL pipelines using Python, PySpark, and Airflow (see the sketch after this posting).
Architect ingestion and transformation workflows using AWS services like S3, Lambda, Glue, and EMR.
Handle large volumes of structured and unstructured data with a focus on performance and reliability.
Lead data warehouse and schema design across Postgres, MongoDB, DynamoDB, and Elasticsearch.
Collaborate cross-functionally to ensure data infrastructure aligns with product and analytics goals.
Build systems from the ground up and contribute to key architectural decisions.
Mentor junior team members and guide implementation best practices.

You're a Great Fit If You Have:
3 to 7 years of experience in data engineering, preferably within B2B SaaS/AI environments (mandatory).
Strong programming skills in Python and experience with PySpark and Airflow.
Strong expertise in designing, building, and deploying data pipelines in product environments.
Mandatory experience with NoSQL databases.
Hands-on experience with AWS data services and distributed data processing tools like Spark or Dask.
Understanding of data modeling, performance tuning, and database design.
Experience working in fast-paced, product-driven teams that have seen the 0 to 1 journey.
Awareness of async programming and how it applies in real-world risk/fraud use cases.
Experience mentoring or guiding junior engineers is preferred.

Role Details:
Location: Mumbai (On-site, WFO)
Experience: 3 to 7 years
Budget: 20 to 30 LPA (max)
Notice Period: 30 days or less

If you're from a B2B SaaS background and looking to solve meaningful, large-scale data problems, we'd love to talk. Apply now or reach out directly to learn more.
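For a concrete flavor of the Python/PySpark pipeline work described above, here is a minimal batch-transform sketch; the S3 buckets and columns are illustrative only.

```python
# Minimal PySpark ETL step: read raw JSON from S3, clean it, write partitioned
# Parquet back. Bucket names and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("s3a://example-raw/orders/")

clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("created_at"))
       .filter(F.col("amount") > 0)
)

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-curated/orders/"))

spark.stop()
```

A job like this would typically be wrapped in an Airflow task (running on EMR or Glue, for example) so that scheduling, retries, and alerting live in the orchestrator.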
Posted 6 days ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About ZenDot
ZenDot is a cutting-edge technology company building AI-driven solutions that power the next generation of productivity, intelligence, and automation for businesses. Our focus lies in delivering enterprise-grade tools that combine large language models, real-time data, and deep integrations across knowledge ecosystems. We're building a state-of-the-art internal platform for enterprise semantic search, secure document retrieval, and intelligent knowledge graphs. To lead this mission, we are hiring a Senior AI Engineer to architect and implement a search and knowledge engine inspired by world-class products like Glean, but tailored to our own innovation roadmap.

Key Responsibilities
Lead the end-to-end design and implementation of an enterprise semantic search engine with hybrid retrieval capabilities (a toy sketch follows this posting).
Build robust, scalable data ingestion pipelines to index content from sources like Google Workspace, Slack, Jira, Confluence, GitHub, Notion, and more.
Design and optimize a reranking and LLM augmentation layer to improve the quality and relevance of search results.
Construct an internal knowledge graph mapping users, documents, metadata, and relationships to personalize responses.
Implement permission-aware access filters, ensuring secure and role-based query results across users and teams.
Collaborate on a modular AI orchestration layer, integrating search, chat, summarization, and task triggers.
Maintain model benchmarks, A/B testing frameworks, and feedback loops for continuous learning and improvement.
Work closely with product, security, infra, and frontend teams to deliver high-performance and compliant AI solutions.

Required Skills & Experience
3+ years of experience in AI/ML engineering with deep expertise in information retrieval (IR), NLP, and vector search.
Strong understanding of, and hands-on work with, BM25 and vector stores (FAISS, Weaviate, Vespa, Elasticsearch).
Proficiency in transformer-based models (BERT, RoBERTa, OpenAI embeddings) and document embedding techniques.
Experience building hybrid search pipelines (sparse + dense), rerankers, and multi-modal retrieval systems.
Skilled in Python, PyTorch/TensorFlow, and data engineering frameworks (Airflow, Spark, etc.).
Familiar with RBAC systems, OAuth2, and enterprise permissioning logic.
Hands-on with graph data structures or knowledge graph tools like Neo4j, RDF, or custom DAG engines.
Cloud-native architecture experience (AWS/GCP), Kubernetes, and microservices best practices.

Bonus Points For
Building or contributing to open-source IR/NLP/search frameworks (e.g., Haystack, Milvus, LangChain).
Past work with LLM-driven RAG (Retrieval-Augmented Generation) systems.
Familiarity with document-level compliance, access auditing, and SAML/SCIM integrations.
Ability to work in fast-paced, zero-to-one product environments with deep ownership.
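As a toy illustration of the hybrid (sparse plus dense) retrieval pipeline described above, here is a sketch assuming the rank_bm25 and sentence-transformers packages; the corpus, query, and model name are illustrative.

```python
# Hybrid retrieval sketch: BM25 candidate generation, then a dense re-rank.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "Quarterly revenue report for the sales team",
    "Onboarding guide for new engineers",
    "Incident postmortem: search latency regression",
]
bm25 = BM25Okapi([d.lower().split() for d in docs])

query = "how do we onboard a new engineer"
sparse_scores = bm25.get_scores(query.lower().split())
candidate_ids = np.argsort(sparse_scores)[::-1][:2]  # top-k sparse candidates

encoder = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = encoder.encode(query, convert_to_tensor=True)
d_emb = encoder.encode([docs[i] for i in candidate_ids], convert_to_tensor=True)
dense_scores = util.cos_sim(q_emb, d_emb)[0]

best = candidate_ids[int(dense_scores.argmax())]
print("Best match:", docs[best])
```

A production system would swap the in-memory lists for a vector store such as FAISS, Weaviate, Vespa, or Elasticsearch, apply permission-aware filters before ranking, and add an LLM-based reranking layer.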
Posted 6 days ago
4.0 - 5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Role: Snowflake Data Engineer
Location: Delhi NCR/Bangalore
Years of experience: 4 to 5 years (please do not apply with less than 4 years of experience)
Full-time position

Qualifications
Experience with Snowflake, DBT (Data Build Tool), and Airflow (an illustrative orchestration sketch follows this posting).
Strong data engineer with good analytical skills.
Hands-on DBT and Snowflake skills; Airflow skills are good to have.
Good understanding of data architecture: star schema, snowflake schema, facts and dimensions, data warehousing basics, data models, etc.
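For context on how the three tools in this listing typically fit together, here is a minimal orchestration sketch: an Airflow DAG that runs dbt models against Snowflake; the project path, target name, and schedule are placeholders.

```python
# Minimal Airflow DAG running dbt build steps against a Snowflake target.
# The dbt project path and target are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/my_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/my_project && dbt test --target prod",
    )
    dbt_run >> dbt_test  # run the models, then run the schema/data tests
```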
Posted 6 days ago
7.0 - 9.0 years
0 Lacs
New Delhi, Delhi, India
On-site
The purpose of this role is to understand, model, and facilitate change in a significant area of the business and technology portfolio, either by line of business, geography, or specific architecture domain, whilst building the overall Architecture capability and knowledge base of the company.

Job Description:

Role Overview:
We are seeking a highly skilled and motivated Cloud Data Engineering Manager to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The GCP Data Engineering Manager will design, implement, and maintain scalable, reliable, and efficient data solutions on Google Cloud Platform (GCP). The role focuses on enabling data-driven decision-making by developing ETL/ELT pipelines, managing large-scale datasets, and optimizing data workflows. The ideal candidate is a proactive problem-solver with strong technical expertise in GCP, a passion for data engineering, and a commitment to delivering high-quality solutions aligned with business needs.

Key Responsibilities:

Data Engineering & Development:
Design, build, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming structured and unstructured data (a streaming-pipeline sketch follows this posting).
Implement enterprise-level data solutions using GCP services such as BigQuery, Dataform, Cloud Storage, Dataflow, Cloud Functions, Cloud Pub/Sub, and Cloud Composer.
Develop and optimize data architectures that support real-time and batch data processing.
Build, optimize, and maintain CI/CD pipelines using tools like Jenkins, GitLab, or Google Cloud Build.
Automate testing, integration, and deployment processes to ensure fast and reliable software delivery.

Cloud Infrastructure Management:
Manage and deploy GCP infrastructure components to enable seamless data workflows.
Ensure data solutions are robust, scalable, and cost-effective, leveraging GCP best practices.

Infrastructure Automation and Management:
Design, deploy, and maintain scalable and secure infrastructure on GCP.
Implement Infrastructure as Code (IaC) using tools like Terraform.
Manage Kubernetes clusters (GKE) for containerized workloads.

Collaboration and Stakeholder Engagement:
Work closely with cross-functional teams, including data analysts, data scientists, DevOps, and business stakeholders, to deliver data projects aligned with business goals.
Translate business requirements into scalable, technical solutions while collaborating with team members to ensure successful implementation.

Quality Assurance & Optimization:
Implement best practices for data governance, security, and privacy, ensuring compliance with organizational policies and regulations.
Conduct thorough quality assurance, including testing and validation, to ensure the accuracy and reliability of data pipelines.
Monitor and optimize pipeline performance to meet SLAs and minimize operational costs.

Qualifications and Certifications:
Education: Bachelor's or master's degree in computer science, Information Technology, Engineering, or a related field.
Experience: Minimum of 7 to 9 years of experience in data engineering, with at least 4 years working on GCP cloud platforms, and proven experience designing and implementing data workflows using GCP services like BigQuery, Dataform, Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer.
Certifications: Google Cloud Professional Data Engineer certification preferred.

Key Skills:

Mandatory Skills:
Advanced proficiency in Python for data pipelines and automation.
Strong SQL skills for querying, transforming, and analyzing large datasets.
Strong hands-on experience with GCP services, including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine, and Kubernetes Engine (GKE).
Hands-on experience with CI/CD tools such as Jenkins, GitHub, or Bitbucket.
Proficiency in Docker, Kubernetes, and Terraform or Ansible for containerization, orchestration, and infrastructure as code (IaC).
Familiarity with workflow orchestration tools like Apache Airflow or Cloud Composer.
Strong understanding of Agile/Scrum methodologies.

Nice-to-Have Skills:
Experience with other cloud platforms like AWS or Azure.
Knowledge of data visualization tools (e.g., Power BI, Looker, Tableau).
Understanding of machine learning workflows and their integration with data pipelines.

Soft Skills:
Strong problem-solving and critical-thinking abilities.
Excellent communication skills to collaborate with technical and non-technical stakeholders.
Proactive attitude towards innovation and learning.
Ability to work independently and as part of a collaborative team.

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
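To illustrate the streaming side of the ETL/ELT pipelines described above, here is a minimal Apache Beam sketch of the kind that could run on Dataflow; the Pub/Sub topic, BigQuery table, and schema are placeholders.

```python
# Streaming Beam pipeline: Pub/Sub -> parse JSON -> BigQuery.
# Topic, table, and schema are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-proj/topics/clicks")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "my-proj:analytics.clicks",
            schema="user_id:STRING,url:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```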
Posted 6 days ago
5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview:
Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office.

Qualifications:
B.E./B.Tech in Computer Science, IT, or related discipline
MCS/MCA or equivalent preferred

Key Responsibilities:
Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub
Define standards and best practices for data ingestion, transformation, and storage
Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines
Lead Snowflake environment setup, configuration, performance tuning, and optimization
Integrate Azure Data Services with Snowflake to support diverse business use cases
Implement governance, metadata management, and security policies
Mentor junior developers and data engineers on cloud data technologies and best practices

Experience and Skills Required:
5 to 9 years of overall experience in data architecture or data engineering roles
Strong, hands-on expertise in Snowflake, including design, development, and performance tuning
Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.)
Understanding of cloud data integration techniques and ELT/ETL frameworks
Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory
Proven ability to handle structured, semi-structured, and unstructured data
Strong analytical, problem-solving, and communication skills

Nice to Have:
Certifications in Snowflake and/or Microsoft Azure
Experience with CI/CD tools like GitHub for code versioning and deployment
Familiarity with real-time or near-real-time data ingestion

Why Join Diacto Technologies?
Work with a cutting-edge tech stack and cloud-native architectures
Be part of a data-driven culture with opportunities for continuous learning
Collaborate with industry experts and build transformative data solutions
Competitive salary and benefits with a collaborative work environment in Baner, Pune

How to Apply:
Option 1 (Preferred): Copy and paste the following link into your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrcIhRQqJKDXiCEfrQG8Rtsk46Etg4-K8eiwqJ_GELL6ewSC9vl4BjaTwUAHzXZTE3nOtgaiQLCso_vWzieLkoV9Nw==/
Option 2:
1. Please visit our website's career section at https://www.diacto.com/careers/
2. Scroll down to the "Who are we looking for?" section
3. Find the listing for "Data Architect (Snowflake)"
4. Proceed with the virtual interview by clicking on "Apply Now."
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities:
Design and develop a modular, scalable AI platform to serve foundation model and RAG-based applications.
Build pipelines for embedding generation, document chunking, and indexing (a minimal sketch follows this posting).
Develop integrations with vector databases like Pinecone, Weaviate, Chroma, or FAISS.
Orchestrate LLM flows using tools like LangChain, LlamaIndex, and OpenAI APIs.
Implement RAG architectures to combine generative models with structured and unstructured knowledge sources.
Create robust APIs and developer tools for easy adoption of AI models across teams.
Build observability and monitoring into AI workflows for performance, cost, and output quality.
Collaborate with DevOps, Data Engineering, and Product to align platform capabilities with business use cases.

Core Skill Set:
Strong experience in Python, with deep familiarity in ML/AI frameworks (PyTorch, Hugging Face, TensorFlow).
Experience building LLM applications, particularly using LangChain, LlamaIndex, and OpenAI or Anthropic APIs.
Practical understanding of vector search, semantic retrieval, and embedding models.
Familiarity with AI platform tools (e.g., MLflow, Kubernetes, Airflow, Prefect, Ray Serve).
Hands-on with cloud infrastructure (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Solid grasp of RAG architecture design, prompt engineering, and model evaluation.
Understanding of MLOps, CI/CD, and data pipelines in production environments.

Preferred Qualifications:
Experience designing and scaling internal ML/AI platforms or LLMOps tools.
Experience with fine-tuning LLMs or customizing embeddings for domain-specific applications.
Contributions to open-source AI platform components.
Knowledge of data privacy, governance, and responsible AI practices.

What You'll Get:
A high-impact role building the core AI infrastructure of our company.
Flexible work environment and competitive compensation.
Access to cutting-edge foundation models and tooling.
Opportunity to shape the future of applied AI within a fast-moving team.
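As a minimal sketch of the embedding-generation, chunking, and indexing pipeline listed above, here is an example assuming FAISS and a sentence-transformers model; the documents, chunk size, and model name are illustrative.

```python
# Chunk -> embed -> index -> retrieve: the indexing half of a RAG pipeline.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer


def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    # naive fixed-width character chunking with overlap
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]


documents = ["...long policy document...", "...long runbook..."]  # placeholders
chunks = [c for doc in documents for c in chunk(doc)]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(vectors.shape[1])  # inner product == cosine after normalization
index.add(np.asarray(vectors, dtype="float32"))

query = encoder.encode(["how do I rotate the API key?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
context = [chunks[i] for i in ids[0]]  # grounding context handed to the LLM call
```

In a full RAG flow the retrieved chunks would be passed as context to an LLM via LangChain, LlamaIndex, or a direct OpenAI API call, with Pinecone, Weaviate, or Chroma replacing the local FAISS index at scale.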
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Scope:
Responsible for creating, monitoring, and maintaining various databases (MySQL, PostgreSQL, Oracle, ClickHouse), including NoSQL databases like Cassandra, MongoDB, ScyllaDB, etc. Automating routine database administration, monitoring, and alerting activities using shell scripting, Python, Perl, etc. Providing DB design solutions; hands-on expert use of SQL and other DB-related tools. Providing solutions for database high availability (HA), data security, governance, compliance measures, etc.

You'll be Responsible for:
Ensure optimal health, integrity, availability, performance, and security of all databases.
Develop and maintain data categorization and security standards.
Evaluate and recommend new database technologies and management tools; optimize existing and future technology investments to maximize returns.
Provide day-to-day support to internal IT support groups, external partners, and customers as required.
Manage outsourced database administration services to perform basic monitoring and administrative-level tasks as directed.
Participate in change and problem management activities, root cause analysis, and development of knowledge articles to support the organization's program.
Support application testing and production operations.
Serve as database administrator.
Document, monitor, test, and adjust backup and recovery procedures to ensure important data is available in a disaster scenario.
Serve as on-call database administrator on a rotating basis.
Develop, implement, and maintain MySQL, PostgreSQL, MongoDB, ClickHouse, Cassandra, ScyllaDB, and Oracle instances, including automated scripts for monitoring and maintenance of individual databases.
Diligently team with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
Collaborate with various teams to install database software updates, patches, and version upgrades when required.
Performance tuning of databases, SQL query tuning, and optimizing database designs.
Knowledge of schedulers: cron, or newer-generation schedulers like Apache Airflow.
Provide subject matter expertise to internal and external project teams, application developers, and others as needed.
Support application testing and production operations.
Responsible for implementation and ongoing administration of data pipelines.

What you'd have:
B.E./B.Tech/MCA from a premier institute.
5-9 years of experience in managing enterprise databases.

Knowledge and Skills:
Expert in any three of the databases (covering at least one each from the SQL and NoSQL database families): Oracle, MySQL, PostgreSQL, ClickHouse, and NoSQL databases like MongoDB, Cassandra, ScyllaDB, Redis, Aerospike, etc.
Installing MySQL, PostgreSQL, ClickHouse, Oracle, Cassandra, ScyllaDB, and MongoDB.
Backing up and recovering Oracle, MySQL, MongoDB, ClickHouse, Cassandra, ScyllaDB, and PostgreSQL databases.
User-level access: risks and threats.
Synchronous and asynchronous replication, converged systems, partitioning, and storage-as-a-service (cloud technologies).
Linux operating systems (RHEL, Ubuntu, CentOS), including shell scripting.
Windows Server operating system.
Industry-leading database monitoring tools and platforms.
Data integration techniques, platforms, and tools.
Modern database backup technologies and strategies.

Why join us?
Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
Posted 6 days ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing.

Key Responsibilities:
Design and manage data pipelines, storage layers, and ingestion frameworks.
Build platforms for batch and streaming data processing (Spark, Kafka, Flink); a streaming sketch follows this posting.
Optimize data systems for scalability, fault tolerance, and performance.
Collaborate with data engineers, analysts, and DevOps to enable data access.
Enforce data governance, access controls, and compliance standards.

Required Skills & Qualifications:
Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow).
Strong SQL and experience with cloud data platforms (Snowflake, BigQuery, Redshift).
Knowledge of data warehousing, lakehouse, and ETL/ELT pipelines.
Experience with infrastructure as code and automation.
Familiarity with data quality, security, and metadata management.

Soft Skills:
Strong troubleshooting and problem-solving skills.
Ability to work independently and in a team.
Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
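As an illustration of the batch-and-streaming platforms this role mentions, here is a minimal Spark Structured Streaming sketch reading from Kafka; the broker, topic, schema, and sink paths are placeholders.

```python
# Streaming ingestion sketch: Kafka -> parse JSON -> append Parquet to a lake.
# Requires the spark-sql-kafka connector on the Spark classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://example-lake/events/")
          .option("checkpointLocation", "s3a://example-lake/_checkpoints/events/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```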
Posted 6 days ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
The Data Architect is responsible for defining and leading the Data Architecture, Data Quality, and Data Governance functions, and for ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will be working with our product, business, and engineering stakeholders, understanding our current ecosystems, and then building consensus to design solutions, write code and automation, define standards, establish best practices across the company, and build world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale.

Role
Design, implement, and lead Data Architecture, Data Quality, and Data Governance.
Define data modeling standards and foundational best practices.
Develop and evangelize data quality standards and practices.
Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data.
Drive the successful adoption of organizational data utilization and self-serviced data platforms.
Create and maintain critical data standards and metadata that allow data to be understood and leveraged as a shared asset.
Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing.
Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data.
Provide architectural assessments, strategies, and roadmaps for data management.
Apply hands-on subject matter expertise in the architecture and administration of Big Data platforms and Data Lake technologies (AWS S3/Hive), and experience with ML and Data Science platforms.
Implement and manage industry best-practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, and CI/CD.
Translate big data and analytics requirements into data models that will operate at large scale and high performance, and guide the data analytics engineers on these data models.
Define templates and processes for the design and analysis of data models, data flows, and integration.
Lead and mentor Data Analytics team members in best practices, processes, and technologies in data platforms.

Qualifications
B.S. or M.S. in Computer Science, or equivalent degree.
10+ years of hands-on experience in Data Warehouse, ETL, Data Modeling & Reporting.
7+ years of hands-on experience in productionizing and deploying Big Data platforms and applications.
Hands-on experience working with relational/SQL, distributed columnar data stores/NoSQL databases, time-series databases, Spark streaming, Kafka, Hive, Delta Parquet, Avro, and more.
Extensive experience in understanding a variety of complex business use cases and modeling the data in the data warehouse.
Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools.
Proven experience in building a custom enterprise data warehouse or implementing tools like data catalogs, Spark, Tableau, Kubernetes, and Docker.
Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on experience in Amazon Web Services (AWS).
Strong verbal and written communication skills; must work effectively across internal and external organizations and virtual teams.
Demonstrated industry leadership in the fields of Data Warehousing, Data Science, and Big Data related technologies.
Strong understanding of distributed systems and container-based development using Docker and the Kubernetes ecosystem.
Deep knowledge of data structures and algorithms.
Experience working in large teams using CI/CD and agile methodologies.

Unique ID -
Posted 6 days ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do Demonstrate a deep understanding of cloud native, distributed micro service based architectures Deliver solutions for complex business problems through software standard SDLC Build strong relationships with both internal and external stakeholders including product, business and sales partners Demonstrate excellent communication skills with the ability to both simplify complex problems and also dive deeper if needed Build and manage strong technical teams that deliver complex software solutions that scale Manage teams with cross functional skills that include software, quality, reliability engineers, project managers and scrum masters Provide deep troubleshooting skills with the ability to lead and solve production and customer issues under pressure Leverage strong experience in full stack software development and public cloud like GCP and AWS Mentor, coach and develop junior and senior software, quality and reliability engineers Lead with a data/metrics driven mindset with a maniacal focus towards optimizing and creating efficient solutions Ensure compliance with EFX secure software development guidelines and best practices and responsible for meeting and maintaining QE, DevSec, and FinOps KPIs Define, maintain and report SLA, SLO, SLIs meeting EFX engineering standards in partnership with the product, engineering and architecture teams Collaborate with architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices Drive up-to-date technical documentation including support, end user documentation and run books Lead Sprint planning, Sprint Retrospectives, and other team activity Responsible for implementation architecture decision making associated with Product features/stories, refactoring work, and EOSL decisions Create and deliver technical presentations to internal and external technical and non-technical stakeholders communicating with clarity and precision, and present complex information in a concise format that is audience appropriate What Experience You Need Bachelor's degree or equivalent experience 7+ years of software engineering experience 7+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 7+ years experience with Cloud technology: GCP, AWS, or Azure 7+ years experience designing and developing cloud-native solutions 7+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 7+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. 
Strong communication and presentation skills
Strong leadership qualities
Demonstrated problem-solving skills and the ability to resolve conflicts
Experience creating and maintaining product and software roadmaps
Experience overseeing yearly budgets as well as product/project budgets
Working in a highly regulated environment
Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
Experience with backend technologies such as Java/J2EE, SpringBoot, SOA and microservices
Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle
Agile environments (e.g. Scrum, XP)
Relational databases (e.g. SQL Server, MySQL)
Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
Developing with modern JDK (v1.7+)
Automated testing: JUnit, Selenium, LoadRunner, SoapUI
We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks.
Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!
Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.
Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 6 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
Remote
The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers.
Position: Data Scientist
Location: Trivandrum (Remote or Hybrid)
Type: Full-time
Start Date: Immediate
Company: Turilytix.ai
About the Role: Join us as a Data Scientist and work on challenging ML problems across paper manufacturing, retail, food, and IT infrastructure. Use real-world data to drive predictive intelligence with BIG-AI.
Responsibilities:
• Clean, engineer, and model sensor & telemetry data
• Build ML models for prediction and classification
• Develop explainability using SHAP, LIME
• Collaborate with product/engineering to operationalize models
Required Skills:
• Python, Pandas, Scikit-learn
• Time-series & anomaly detection
• SHAP / LIME / interpretable ML
• SQL, Jupyter Notebooks
• Bonus: DVC, Git, Airflow
Why Work With Us:
• Hands-on with real-world sensor data
• No red tape, just impact
• Remote work and global deployment
• Drive AI adoption without complexity
Responsibilities:
Analyze raw data: assess quality, cleanse, and structure it for downstream processing
Design accurate and scalable prediction algorithms
Collaborate with the engineering team to bring analytical prototypes to production
Generate actionable insights for business impact
Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Power BI)
Email your resume/GitHub: hr@turilytix.ai
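Since the posting calls out explainability with SHAP on scikit-learn models, here is a minimal, hedged sketch of that workflow; the synthetic dataset and model choice are placeholders rather than anything specified by the employer.

```python
# Minimal sketch: explaining a scikit-learn classifier with SHAP, in the spirit
# of the "Develop explainability using SHAP, LIME" responsibility. The synthetic
# data stands in for the sensor/telemetry features a real role would use.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for sensor/telemetry features
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# Each entry gives a feature's contribution for one sample (per class for classifiers)
print("SHAP values computed for", len(X[:50]), "samples")
```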
Posted 6 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
You Lead the Way. We’ve Got Your Back.
At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible — and we’re proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture.
We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.
How will you make an impact in this role?
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities.
Develop software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP, and the Adobe Suite (like Customer Journey Analytics).
Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps.
Create technical solution designs to meet business requirements.
Define best practices to be followed by the team.
Take your place as a core member of an Agile team driving the latest development practices.
Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods.
Suggest and recommend solution architecture to resolve business problems.
Perform peer code review and participate in technical discussions with the team on the best solutions possible.
As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.
Minimum Qualifications:
· BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
· 5+ years of hands-on software development experience with Big Data & Analytics solutions – Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud BigQuery, Bigtable, Airflow.
· Working knowledge of the Adobe Suite, including Adobe Experience Platform, Adobe Customer Journey Analytics, and CDP.
· Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
· Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable.
· Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
· Certification in a cloud platform (GCP Professional Data Engineer) is a plus.
· Understanding of distributed (multi-tiered) systems, data structures, algorithms & design patterns.
· Strong object-oriented programming skills and design patterns.
· Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
· Good knowledge of and experience with configuration management tools like GitHub.
· Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively.
· Looks proactively beyond the obvious for continuous improvement opportunities.
· Communicates effectively with product and cross-functional teams.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
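The qualifications above centre on Spark/Python pipelines over GCP storage and BigQuery. As a rough illustration only, the following PySpark sketch shows a typical batch rollup; the bucket paths, column names, and table layout are hypothetical.

```python
# Minimal PySpark sketch of a batch transformation: read raw events, aggregate
# per customer per day, and write the result as partitioned Parquet.
# The input path, column names, and output path are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("marketing_events_rollup").getOrCreate()

events = spark.read.parquet("gs://example-bucket/raw/marketing_events/")  # hypothetical path

daily_rollup = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("customer_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("campaign_id").alias("campaigns_touched"),
    )
)

daily_rollup.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-bucket/curated/daily_rollup/"  # hypothetical output location
)
```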
Posted 6 days ago
0.0 - 2.0 years
0 Lacs
Raipur, Chhattisgarh
On-site
Company Name: Interbiz Consulting Pvt Ltd
Position/Designation: Data Engineer
Job Location: Raipur (C.G.)
Mode: Work from office
Experience: 2 to 5 Years
We are seeking a talented and detail-oriented Data Engineer to join our growing Data & Analytics team. You will be responsible for building and maintaining robust, scalable data pipelines and infrastructure to support data-driven decision-making across the organization.
Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam (see the consumer sketch after this posting).
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster.
Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.
Required Skills and Qualifications
Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
[1–5+] years of experience in data engineering or related roles.
Proficiency in Python, with strong knowledge of OOP and data structures & algorithms.
Comfortable working in Linux environments for development and deployment.
Strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Solid experience with Apache Spark (PySpark/Scala).
Familiarity with real-time processing tools like Kafka, Flink, or Beam.
Hands-on experience with Airflow, Dagster, or similar orchestration tools.
Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc.
AZ-900 or other Azure certifications are a plus.
Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake.
Understanding of modern Lakehouse architecture and related best practices.
Familiarity with Marquez, Collibra, or other cataloging tools.
Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Proficiency in setting up dashboards and alerts with Prometheus and Grafana.
Interested candidates may share their CV at swapna.rani@interbizconsulting.com or visit www.interbizconsulting.com
Note: Immediate joiners will be preferred.
Job Type: Full-time
Pay: From ₹25,000.00 per month
Benefits: Food provided, Health insurance, Leave encashment, Provident Fund
Supplemental Pay: Yearly bonus
Application Question(s):
Do you have at least 2 years of work experience in Python?
Do you have at least 2 years of work experience in Data Science?
Are you from Raipur, Chhattisgarh?
Are you willing to work for more than 2 years?
What is your notice period?
What is your current salary, and what are you expecting?
Work Location: In person
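One of the responsibilities above is real-time ingestion with Apache Kafka. The sketch below assumes the kafka-python client and uses hypothetical broker, topic, and consumer-group names; it is a minimal illustration of the consume loop, not the company's actual pipeline.

```python
# Minimal real-time ingestion sketch, assuming the kafka-python package.
# The broker address, topic name, and consumer group are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sensor-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",   # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="ingestion-demo",
)

for message in consumer:
    event = message.value
    # In a real pipeline this payload would be validated and landed in Blob
    # Storage or a Delta table; printing it here just shows the flow.
    print(message.topic, message.partition, message.offset, event)
```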
Posted 6 days ago
0.0 - 5.0 years
0 Lacs
Sanand, Gujarat
On-site
HR Contact No. 6395012950 Job Title: Design Engineer – HVAC Manufacturing Location: Gujarat Department: Engineering/Design Reports To: MD Job Type: Full-Time Position Overview: We are seeking a talented and detail-oriented Design Engineer to join our engineering team in a dynamic HVAC manufacturing environment. The ideal candidate will have a strong background in mechanical design, proficiency in AutoCAD , and hands-on experience with nesting software for sheet metal fabrication. This role is critical to the development and production of high-quality HVAC components and systems, supporting product design, customization, and manufacturing optimization. Key Responsibilities: Design HVAC components and assemblies using AutoCAD/Nesting based on project specifications. Create and manage detailed 2D and 3D drawings, BOMs, and technical documentation. Prepare nesting layouts using nesting software for sheet metal cutting operations. Collaborate with production and fabrication teams to ensure manufacturability and cost-efficiency of designs. Modify and improve existing designs to meet performance and production requirements. Work with the Customers and Sales team to develop a quotable/manufacturing solution to the customer request. Ensure timely output drawings for customer approval Participate in new product development and R&D initiatives. Visiting Project sites as per requirement. Ensure all designs comply with industry standards and company quality procedures. Assist in resolving manufacturing and assembly issues related to design. Required Qualifications: Diploma or Bachelor's Degree in Mechanical Engineering, Manufacturing Engineering, or related field. Minimum of 2–5 years of experience in a design engineering role within a manufacturing environment, preferably HVAC. Proficiency in AutoCAD (2D required, 3D is a plus). Hands-on experience with nesting software (e.g., SigmaNEST, NestFab, or similar). Solid understanding of sheet metal fabrication processes and design principles. Strong analytical, problem-solving, and communication skills. Ability to interpret technical drawings and specifications. Experience working in a cross-functional team environment. Preferred Qualifications: Familiarity with HVAC system components and airflow principles. Experience with additional CAD/CAM software (e.g., SolidWorks, Inventor). Knowledge of lean manufacturing or value engineering practices. Job Type: Full-time Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Paid time off Provident Fund Schedule: Day shift Supplemental Pay: Yearly bonus Work Location: In person
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Title: Data Engineer
Location: Baner, Pune (Hybrid)
6 to 12 Months contract
Responsibilities:
Design, develop, and execute robust, scalable data pipelines to extract, transform, and load data from on-premises SQL Server databases to GCP Cloud SQL PostgreSQL.
Analyze existing SQL Server schemas, data types, and stored procedures, and plan for their conversion and optimization for the PostgreSQL environment.
Implement and support data migration strategies from on-premise or legacy systems to cloud environments, primarily GCP.
Implement rigorous data validation and quality checks before, during, and after migration to ensure data integrity and consistency.
Collaborate closely with Database Administrators, application developers, and business analysts to understand source data structures and target requirements.
Develop and maintain scripts (primarily Python or Java) for automating migration tasks, data validation, and post-migration data reconciliation.
Identify and resolve data discrepancies, performance bottlenecks, and technical challenges encountered during the migration process.
Document migration strategies, data mapping, transformation rules, and post-migration validation procedures.
Support cutover activities and ensure minimal downtime during the transition phase.
Apply data governance, security, and privacy standards across data assets in the cloud.
Refactor SQL Server stored procedures and business logic for implementation in PostgreSQL or the application layer where applicable.
Leverage schema conversion tools (e.g., pgLoader, custom scripts) to automate and validate schema translation from SQL Server to PostgreSQL.
Develop automated data validation and reconciliation scripts to ensure row-level parity and business logic integrity post-migration.
Implement robust monitoring, logging, and alerting mechanisms to ensure pipeline reliability and quick failure resolution using GCP-native tools (e.g., Stackdriver/Cloud Monitoring).
Must-Have Skills:
Expert-level SQL proficiency across T-SQL (SQL Server) and PostgreSQL, with strong hands-on experience in data transformation, query optimization, and relational database design.
Solid understanding of and hands-on experience working with relational databases.
Strong experience in data engineering, with hands-on work in the cloud, preferably GCP.
Experience with data migration techniques and strategies between different relational database platforms.
Hands-on experience with cloud data and monitoring services (relational database, data pipeline, and logging/monitoring services) on one of the major cloud providers: GCP, AWS, or Azure.
Experience with Python or Java for building and managing data pipelines, with proficiency in data manipulation, scripting, and automation of data processes.
Familiarity with ETL/ELT processes and orchestration tools like Cloud Composer (Airflow).
Understanding of data modeling and schema design.
Strong analytical and problem-solving skills, with a keen eye for data quality and integrity.
Experience with version control systems like Git.
Good-to-Have Skills:
Exposure to database migration tools or services (e.g., AWS DMS, GCP Database Migration Service, or similar).
Experience with real-time data processing using Pub/Sub.
Experience with shell scripting.
Exposure to CI/CD pipelines for deploying and maintaining data workflows.
Familiarity with NoSQL databases and other GCP data services (e.g., Firestore, Bigtable).
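A recurring theme in this posting is post-migration validation and reconciliation. The following is a minimal sketch of a row-count comparison between SQL Server and PostgreSQL using pandas and SQLAlchemy; the connection strings and table list are hypothetical, and a production script would extend this with checksums or key-level diffs.

```python
# Minimal post-migration reconciliation sketch: compare row counts per table
# between the SQL Server source and the Cloud SQL PostgreSQL target.
# Assumes pandas + SQLAlchemy; connection strings and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

source = create_engine(
    "mssql+pyodbc://user:pass@source-host/sales_db?driver=ODBC+Driver+17+for+SQL+Server"
)
target = create_engine("postgresql+psycopg2://user:pass@target-host/sales_db")

tables = ["customers", "orders", "order_items"]  # hypothetical table list

rows = []
for table in tables:
    src_count = pd.read_sql(text(f"SELECT COUNT(*) AS n FROM {table}"), source)["n"][0]
    tgt_count = pd.read_sql(text(f"SELECT COUNT(*) AS n FROM {table}"), target)["n"][0]
    rows.append({"table": table, "source": src_count, "target": tgt_count,
                 "match": src_count == tgt_count})

report = pd.DataFrame(rows)
print(report.to_string(index=False))  # row-level parity would need checksums / key diffs
```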
Posted 6 days ago
3.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 3 to 10 Years
Required Qualifications: Data Engineering Skills
3–5 years of experience in data engineering, with hands-on experience in Snowflake and basic to intermediate proficiency in dbt.
Capable of building and maintaining ELT pipelines using dbt and Snowflake, with guidance on architecture and best practices.
Understanding of ELT principles and foundational knowledge of data modeling techniques (preferably Kimball/Dimensional).
Intermediate experience with SAP Data Services (SAP DS), including extracting, transforming, and integrating data from legacy systems.
Proficient in SQL for data transformation and basic performance tuning in Snowflake (e.g., clustering, partitioning, materializations).
Familiar with workflow orchestration tools like dbt Cloud, Airflow, or Control-M.
Experience using Git for version control and exposure to CI/CD workflows in team environments.
Exposure to cloud storage solutions such as Azure Data Lake, AWS S3, or GCS for ingestion and external staging in Snowflake.
Working knowledge of Python for basic automation and data manipulation tasks.
Understanding of Snowflake's role-based access control (RBAC), data security features, and general data privacy practices like GDPR.
Key Responsibilities
Design and build robust ELT pipelines using dbt on Snowflake, including ingestion from relational databases, APIs, cloud storage, and flat files.
Reverse-engineer and optimize SAP Data Services (SAP DS) jobs to support scalable migration to cloud-based data platforms.
Implement layered data architectures (e.g., staging, intermediate, mart layers) to enable reliable and reusable data assets.
Enhance dbt/Snowflake workflows through performance optimization techniques such as clustering, partitioning, query profiling, and efficient SQL design.
Use orchestration tools like Airflow, dbt Cloud, and Control-M to schedule, monitor, and manage data workflows.
Apply modular SQL practices, testing, documentation, and Git-based CI/CD workflows for version-controlled, maintainable code.
Collaborate with data analysts, scientists, and architects to gather requirements, document solutions, and deliver validated datasets.
Contribute to internal knowledge sharing through reusable dbt components and participate in Agile ceremonies to support consulting delivery.
Skills: workflow orchestration, Git, Airflow, SQL, GCS, ELT pipelines, Azure Data Lake, data modeling, CI/CD, dbt, cloud storage, Snowflake, data security, Python, SAP Data Services, data engineering, AWS S3
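A common way to combine the tools named above is to have Airflow schedule dbt runs against Snowflake. The sketch below shows that pattern with the dbt CLI called from a BashOperator; the project directory and DAG settings are assumptions, not the team's actual configuration.

```python
# Sketch of one common pattern for scheduling dbt on Snowflake: an Airflow DAG
# that shells out to the dbt CLI. The project/profiles paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/analytics_project"  # hypothetical project location

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )

    dbt_run >> dbt_test  # only test models after they have been built
```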
Posted 6 days ago
6.0 years
0 Lacs
India
Remote
AI/ML Engineer – Senior Consultant
The AI Engineering Group is part of the Data Science & AI Competency Center and focuses on the technical and engineering aspects of DS/ML/AI solutions. We are looking for experienced AI/ML Engineers to join our team to help us bring AI/ML solutions into production, automate processes, and define reusable best practices and accelerators.
Duties description:
The person we are looking for will become part of the Data Science & AI Competency Center, working in the AI Engineering team. The key duties are:
Building high-performing, scalable, enterprise-grade ML/AI applications in a cloud environment
Working with Data Science, Data Engineering and Cloud teams to implement Machine Learning models into production
Practical and innovative implementations of ML/AI automation, for scale and efficiency
Design, delivery and management of industrialized processing pipelines
Defining and implementing best practices in the ML model life cycle and ML operations
Implementing AI/MLOps frameworks and supporting Data Science teams in best practices
Gathering and applying knowledge on modern techniques, tools and frameworks in the area of ML architecture and operations
Gathering technical requirements & estimating planned work
Presenting solutions, concepts and results to internal and external clients
Being the technical leader on ML projects, defining tasks and guidelines and evaluating results
Creating technical documentation
Supporting and growing junior engineers
Must have skills:
Good understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life cycle, AI architectures
Good understanding of cloud concepts and architectures as well as working knowledge of selected cloud services, preferably GCP
Experience in programming ML algorithms and data processing pipelines using Python
At least 6-8 years of experience in production-ready code development
Experience in designing and implementing data pipelines
Practical experience with implementing ML solutions on GCP Vertex AI and/or Databricks
Good communication skills
Ability to work in a team and support others
Taking responsibility for tasks and deliverables
Great problem-solving skills and critical thinking
Fluency in written and spoken English
Nice to have skills & knowledge:
Practical experience with other programming languages: PySpark, Scala, R, Java
Practical experience with tools like Airflow, ADF or Kubeflow
Good understanding of CI/CD and DevOps concepts, and experience working with selected tools (preferably GitHub Actions, GitLab or Azure DevOps)
Experience in applying and/or defining software engineering best practices
Experience productizing ML solutions using technologies like Docker/Kubernetes
We Offer:
Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites.
100% remote. Flexibility regarding working hours. Full-time position.
Comprehensive online onboarding program with a “Buddy” from day 1.
Cooperation with top-tier engineers and experts.
Internal Gallup Certified Strengths Coach to support your growth.
Unlimited access to the Udemy learning platform from day 1.
Certificate training programs. Lingarians earn 500+ technology certificates yearly.
Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
Grow as we grow as a company. 76% of our managers are internal promotions.
A diverse, inclusive, and values-driven community.
Autonomy to choose the way you work. We trust your ideas.
Create our community together. Refer your friends to receive bonuses.
Activities to support your well-being and health.
Plenty of opportunities to donate to charities and support the environment.
Please click on this link to submit your application: https://system.erecruiter.pl/FormTemplates/RecruitmentForm.aspx?WebID=ac709bd295cc4008af7d0a7a0e465818
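For a sense of the "bring ML into production" duties listed above, here is a minimal, generic sketch of training a scikit-learn pipeline and persisting it as a single artifact; the dataset, model, and file name are placeholders, and real MLOps work would add model versioning, a registry, and deployment automation.

```python
# Minimal sketch of packaging a model artifact for production hand-off:
# train a scikit-learn pipeline and persist it with joblib. The dataset and
# file name are placeholders chosen only to keep the example self-contained.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")

# Persisting the whole pipeline keeps preprocessing and model together,
# so the serving side only needs joblib.load("model_pipeline.joblib").
joblib.dump(pipeline, "model_pipeline.joblib")
```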
Posted 6 days ago
3.0 - 7.0 years
3 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Design and develop data pipelines for Generative AI projects by leveraging a combination of technologies, including Vector DB, Graph DB, Airflow, Spark, PySpark, Python, LangChain, AWS Functions, Redshift, and SSIS. This will involve the logical and efficient integration of these tools to create seamless, high-performance data flows that efficiently support the data requirements of our cutting-edge AI initiatives. Collaborate with data scientists, AI researchers, and other stakeholders to understand data requirements and translate them into effective data engineering solutions. User will be managing movement, organization and quality assessments of large set of data to facilitate the creation of Knowledge base for RAG systems and model training Demonstrate familiarity with data integration services such as AWS Glue and Azure Data Factory, showcasing the ability to effectively utilize these platforms for seamless data ingestion, transformation, and orchestration across various sources and destinations. Possess proficiency in constructing data warehouses and data lakes, demonstrating a strong foundation in organizing and consolidating large volumes of structured and unstructured data for efficient storage, retrieval, and analysis. Optimize and maintain data pipelines to ensure high-performance, reliable, and scalable data processing. Develop and implement data validation and quality assurance procedures to ensure the accuracy and consistency of the data used in Generative AI projects. Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement improvements as necessary. Stay current with emerging trends and technologies in the fields of data engineering, Generative AI, and related areas to ensure the continued success of our projects. Collaborate with team members on documentation, knowledge sharing, and best practices for data engineering within a Generative AI context. Ensure data privacy and security compliance in accordance with industry standards and regulations. Qualifications we seek in you: Bachelors or Masters degree in Computer Science, Engineering, or a related field. Strong experience with data engineering technologies, including Vector DB, Graph DB, Airflow, Spark, PySpark, Python, langchain, AWS Functions, Redshift, and SSIS. Strong understanding of data warehousing concepts, ETL processes, and data modeling. Strong understanding of S3 and code-based scripting to move large volumes of data across application storage layers Familiarity with Generative AI concepts and technologies, such as GPT-4, Transformers, and other natural language processing techniques. Excellent problem-solving, analytical, and critical thinking skills. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams. Preferred Qualifications/ skills Knowledge of cloud computing platforms, such as AWS, Azure, or Google Cloud Platform, is a plus. Experience with big data technologies, such as Hadoop, Hive, or Presto, is a plus. Familiarity with machine learning frameworks, such as TensorFlow or PyTorch, is a plus. A continuous learning mindset and a passion for staying up-to-date with the latest advancements in data engineering and Generative AI.
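The role above feeds knowledge bases for RAG systems. As a library-light illustration, the sketch below chunks a few documents, embeds them with sentence-transformers (an assumption; the posting itself names LangChain and vector databases), and retrieves the best match with cosine similarity; the documents and model name are purely illustrative.

```python
# Minimal RAG-retrieval sketch: embed knowledge-base chunks and retrieve the
# most similar one for a query. Uses sentence-transformers + NumPy as a
# stand-in for the vector DB / LangChain stack named in the posting.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Airflow DAGs orchestrate the nightly ingestion into the knowledge base.",
    "Redshift holds the curated analytics tables used by reporting.",
    "SHAP values explain individual predictions of the anomaly model.",
]  # illustrative knowledge-base chunks

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
doc_vectors = model.encode(documents, normalize_embeddings=True)

query = "How is data loaded into the knowledge base?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, the dot product equals cosine similarity.
scores = doc_vectors @ query_vector
best = int(np.argmax(scores))
print(f"best match (score {scores[best]:.2f}): {documents[best]}")
```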
Posted 6 days ago
3.0 - 7.0 years
3 - 7 Lacs
Delhi, India
On-site
Design and develop data pipelines for Generative AI projects by leveraging a combination of technologies, including Vector DB, Graph DB, Airflow, Spark, PySpark, Python, LangChain, AWS Functions, Redshift, and SSIS. This will involve the logical and efficient integration of these tools to create seamless, high-performance data flows that efficiently support the data requirements of our cutting-edge AI initiatives. Collaborate with data scientists, AI researchers, and other stakeholders to understand data requirements and translate them into effective data engineering solutions. User will be managing movement, organization and quality assessments of large set of data to facilitate the creation of Knowledge base for RAG systems and model training Demonstrate familiarity with data integration services such as AWS Glue and Azure Data Factory, showcasing the ability to effectively utilize these platforms for seamless data ingestion, transformation, and orchestration across various sources and destinations. Possess proficiency in constructing data warehouses and data lakes, demonstrating a strong foundation in organizing and consolidating large volumes of structured and unstructured data for efficient storage, retrieval, and analysis. Optimize and maintain data pipelines to ensure high-performance, reliable, and scalable data processing. Develop and implement data validation and quality assurance procedures to ensure the accuracy and consistency of the data used in Generative AI projects. Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement improvements as necessary. Stay current with emerging trends and technologies in the fields of data engineering, Generative AI, and related areas to ensure the continued success of our projects. Collaborate with team members on documentation, knowledge sharing, and best practices for data engineering within a Generative AI context. Ensure data privacy and security compliance in accordance with industry standards and regulations. Qualifications we seek in you: Bachelors or Masters degree in Computer Science, Engineering, or a related field. Strong experience with data engineering technologies, including Vector DB, Graph DB, Airflow, Spark, PySpark, Python, langchain, AWS Functions, Redshift, and SSIS. Strong understanding of data warehousing concepts, ETL processes, and data modeling. Strong understanding of S3 and code-based scripting to move large volumes of data across application storage layers Familiarity with Generative AI concepts and technologies, such as GPT-4, Transformers, and other natural language processing techniques. Excellent problem-solving, analytical, and critical thinking skills. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams. Preferred Qualifications/ skills Knowledge of cloud computing platforms, such as AWS, Azure, or Google Cloud Platform, is a plus. Experience with big data technologies, such as Hadoop, Hive, or Presto, is a plus. Familiarity with machine learning frameworks, such as TensorFlow or PyTorch, is a plus. A continuous learning mindset and a passion for staying up-to-date with the latest advancements in data engineering and Generative AI.
Posted 6 days ago
The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.
The average salary range for Airflow professionals in India varies by experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the field of Airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!