4.0 - 8.0 years
10 - 18 Lacs
Hyderabad
Hybrid
About the Role: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing; Cloud Functions for lightweight serverless compute; BigQuery for data warehousing and analytics; Cloud Composer (based on Apache Airflow) for orchestration of data workflows; Google Cloud Storage (GCS) for managing data at scale; IAM for access control and security; Cloud Run for containerized applications.
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
- 4-6 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, and Cloud Composer).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience with version control systems such as GitHub and knowledge of CI/CD practices.
- Strong SQL experience with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).

Good to Have (Optional Skills):
- Experience with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
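For illustration only (not part of the posting): a minimal, hedged sketch of the kind of GCS-to-BigQuery load step this role describes, using the google-cloud-bigquery client. The project, bucket, dataset, and table names are hypothetical placeholders.

```python
# Minimal sketch: load a CSV from Google Cloud Storage into BigQuery.
# Assumes google-cloud-bigquery is installed and default credentials are configured.
from google.cloud import bigquery

def load_csv_to_bigquery(uri: str, table_id: str) -> None:
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,      # skip the header row
        autodetect=True,          # infer the schema from the file
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # wait for the load to finish
    print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")

if __name__ == "__main__":
    # Hypothetical source file and destination table.
    load_csv_to_bigquery("gs://example-bucket/raw/orders.csv",
                         "example-project.analytics.orders")
```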
Posted 2 days ago
8.0 - 12.0 years
5 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Teradata to Snowflake and Databricks migration on Azure Cloud; experience with data migration projects, including complex migrations to Databricks; strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration.
Posted 2 days ago
5.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Required Qualifications:
- B.Tech or M.Tech in Computer Science / Data Science from a reputed educational institute.
- 5 to 8 years of proven relevant experience in data architecture and AI/ML at a good MNC.
- Work experience in or knowledge of the healthcare domain is highly desirable.

Responsibilities:
- Apply machine learning, deep learning, and Gen AI tools, focusing on feasibility studies, proof-of-concept implementations, and establishing robust solution blueprints for subsequent implementation by Engineering teams.
- Keep track of the latest emerging technologies, identify data trends, and perform complex data analysis and analytics using AI/ML and Gen AI.
- Use Generative AI (GenAI), predictive, and prescriptive analytics to generate innovative ideas and solutions.
- Apply NLP and computer vision techniques to extract insights from unstructured clinical/healthcare data.
- Apply core data modeling and data engineering principles and practices, leveraging large datasets to design AI-powered solutions.
- Perform data analysis and data visualization to deliver data-driven insights.
- Develop architecture blueprints and strategies for building robust AI-powered solutions leveraging large datasets.
- Define best practices and standards for developing and deploying AI models to production environments using MLOps.
- Provide technical guidance and support to Data Engineers, business users, and software application team members.
- Maintain documentation and ensure compliance with relevant AI governance, regulations, and standards.
- Communicate effectively with both technical and non-technical stakeholders.
- Collaborate with cross-functional teams, including business stakeholders, data scientists, data engineers, and IT teams, to define the future data management architecture, aligning with value creation opportunities across the business.

Competencies:
- Good understanding of strategic and emerging technology trends and the practical application of existing and emerging technologies to new and evolving business and operating models, especially AI/ML and GenAI.
- Ability to develop architecture blueprints and strategies.
- Ability to effectively communicate and present to senior-level executives and technical audiences.
- Proficiency in architecting solutions using a cloud-native approach.
- Well versed in data modeling techniques and tools (star schema / de-normalized models, transactional / normalized models).
- Proven skills in building data systems that are reliable, fault tolerant, and performant while remaining economical from a cost perspective.
- Listens to the ideas and concerns of stakeholders, develops an understanding of how their ideas relate to others, and acts to address concerns.
- Identifies risks, develops risk management plans/processes, and successfully implements and operates them.
- Possesses a deep personal motivation to develop new, efficient, effective, and valuable ways to accomplish team tasks.
- Demonstrates critical, out-of-the-box thinking and the ability to look at problems from different points of view, finding solutions that meet the needs of those outside the team, including staff and members.
- Proactively experiments with new digital technologies and acts as a technology evangelist, sharing key learnings with colleagues.
Posted 2 days ago
5.0 - 8.0 years
25 - 30 Lacs
Pune, Gurugram, Bengaluru
Work from Office
NYU Manager - Owais; UR Delivery Manager - Laxmi

Title: Senior Data Developer with Strong MS/Oracle SQL, Python Skills and Critical Thinking

Description: The EDA team seeks a dedicated and detail-oriented Senior Developer I to join our dynamic team. The successful candidate will handle repetitive technical tasks, such as Healthy Planet MS SQL file loads into a data warehouse, monitor Airflow DAGs, manage alerts, and rerun failed processes. Additionally, the role requires monitoring various daily and weekly jobs, which may include generating revenue cycle reports and delivering data to external vendors. The ideal candidate will have robust experience with MS/Oracle SQL, Python, Epic Health Systems, and other relevant technologies.

Overview: As a Senior Developer I on the NYU EDA team, you will play a vital role in improving the operation of our data load and management processes. Your primary responsibilities will be to ensure the accuracy and timeliness of data loads, maintain the health of data pipelines, and verify that all scheduled jobs complete successfully. You will collaborate with cross-functional teams to identify and resolve issues, improve processes, and maintain a high standard of data integrity.

Responsibilities:
- Manage and perform Healthy Planet file loads into a data warehouse.
- Monitor Airflow DAGs for successful completion, manage alerts, and rerun failed tasks as necessary.
- Monitor and oversee other daily and weekly jobs, including FGP cash reports and external reports.
- Collaborate with the data engineering team to streamline data processing workflows.
- Develop automation scripts in SQL and Python to reduce manual intervention in repetitive tasks.
- Ensure all data-related tasks are performed accurately and on time.
- Investigate and resolve data discrepancies and processing issues.
- Prepare and maintain documentation for processes and workflows.
- Conduct periodic data audits to ensure data integrity and compliance with defined standards.

Skillset Requirements:
- MS/Oracle SQL
- Python
- Data warehousing and ETL processes
- Monitoring tools such as Apache Airflow
- Data quality and integrity assurance
- Strong analytical and problem-solving abilities
- Excellent written and verbal communication

Additional Skillset:
- Familiarity with monitoring and managing Apache Airflow DAGs.

Experience: Minimum of 5 years' experience in a similar role, with a focus on data management and process automation. Proven track record of successfully managing complex data processes and meeting deadlines.

Education: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.

Certifications: Epic Cogito, MS/Oracle SQL, Python, or data management certifications are a plus.
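For illustration only (not part of the posting): a minimal sketch of the kind of daily file-load-and-validate Airflow DAG described above. The task names, retry policy, and load/validate helpers are hypothetical placeholders, assuming a recent Airflow 2.x install.

```python
# Minimal sketch of a daily file-load DAG (hypothetical names throughout).
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def load_healthy_planet_file(**context):
    # Placeholder for the actual MS SQL file-load logic.
    print("Loading daily extract into the warehouse...")

def validate_row_counts(**context):
    # Placeholder for a simple data-quality check after the load.
    print("Validating row counts against the source file...")

default_args = {
    "retries": 2,                           # rerun failed tasks automatically
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,               # alert on failure
}

with DAG(
    dag_id="healthy_planet_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    load = PythonOperator(task_id="load_file", python_callable=load_healthy_planet_file)
    validate = PythonOperator(task_id="validate_counts", python_callable=validate_row_counts)
    load >> validate
```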
Posted 2 days ago
7.0 - 11.0 years
9 - 12 Lacs
Hyderabad
Work from Office
Data Engineer with 7+ years of hands-on experience in data engineering, specializing in MS SQL Server, T-SQL, and ETL processes. The role involves developing and managing robust data pipelines, designing complex queries, stored procedures, and functions, and implementing scalable ETL solutions using ADF and SSIS. The candidate will work on Azure Cloud DB, CI/CD processes, and version control systems like Git or TFS in a collaborative team environment. The ideal candidate must have excellent communication skills and the ability to work with stakeholders across all levels. This role is on-site in Hyderabad, with working hours overlapping India and US timings.
Posted 2 days ago
10.0 - 14.0 years
12 - 16 Lacs
Hyderabad
Work from Office
Proven expert at writing SQL code with at least 10 years of experience. Must have 5+ years of experience working with large data volumes, with transactions on the order of 5-10M records. 5+ years of experience modeling loosely coupled relational databases that can store terabytes or petabytes of data. 3+ years of proven expertise in working with large data warehouses. Expert at ETL transformations using SSIS.
Posted 2 days ago
5.0 - 7.0 years
15 - 18 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled GCP Data Engineer with experience in designing and developing data ingestion frameworks, real-time processing solutions, and data transformation frameworks using open-source tools. The role involves operationalizing open-source data-analytic tools for enterprise use, ensuring adherence to data governance policies, and performing root-cause analysis on data-related issues. The ideal candidate should have a strong understanding of cloud platforms, especially GCP, with hands-on expertise in tools such as Kafka, Apache Spark, Python, Hadoop, and Hive. Experience with data governance and DevOps practices, along with GCP certifications, is preferred.
Posted 2 days ago
8.0 - 13.0 years
85 - 90 Lacs
Noida
Work from Office
About the Role: We are looking for a Staff Engineer - Real-time Data Processing to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life:
- Architect, build, and maintain a large-scale real-time data processing platform.
- Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
- Optimize systems for scalability, reliability, and low-latency performance.
- Implement robust monitoring, alerting, and failover mechanisms to ensure high availability.
- Evaluate and integrate open-source and third-party streaming frameworks.
- Contribute to the overall engineering strategy and promote best practices for stream and event processing.
- Mentor junior engineers and lead technical initiatives.

What You Need:
- 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms.
- Hands-on experience with stream processing frameworks such as Apache Flink, Apache Kafka Streams, or Apache Spark Streaming.
- Proficiency in Java, Scala, Python, or Go for building high-performance services.
- Strong understanding of distributed systems, event-driven architecture, and microservices.
- Experience with Kafka, Pulsar, or other distributed messaging systems.
- Working knowledge of containerization tools such as Docker and orchestration tools such as Kubernetes.
- Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry.
- Experience with cloud-native architectures and services (AWS, GCP, or Azure).
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
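For illustration only (not part of the posting): a minimal sketch of consuming a Kafka topic with Spark Structured Streaming, one of the stream-processing stacks mentioned above. The broker address, topic name, and console sink are hypothetical, and the spark-sql-kafka package is assumed to be on the classpath.

```python
# Minimal sketch: consume a Kafka topic with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "clinical-events")             # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string before downstream parsing.
decoded = events.select(col("key").cast("string"), col("value").cast("string"))

query = (
    decoded.writeStream
    .format("console")            # stand-in sink for illustration
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # required for fault tolerance
    .start()
)
query.awaitTermination()
```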
Posted 2 days ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. This role is ideal for engineers focused on data processing and scalability.

Key Responsibilities:
- Design and implement ETL processes
- Manage data warehouses and ensure data quality
- Collaborate with data scientists to provide necessary data
- Optimize data workflows for performance

Required Skills & Qualifications:
- Proficiency in SQL and Python
- Experience with data pipeline tools like Apache Airflow
- Familiarity with big data technologies (Spark, Hadoop)
- Bonus: Knowledge of cloud data services (AWS Redshift, Google BigQuery)

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
Posted 2 days ago
5.0 - 15.0 years
0 - 28 Lacs
Bengaluru
Work from Office
Key Skills: Python, PySpark, AWS Glue, Redshift, and Spark Streaming.

Job Description:
- 6+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
- Broad knowledge of Python, PySpark, Glue jobs, Lambda, Step Functions, and SQL.

Client expectations for the engagement:
1. Process these Kafka events and save data in the Trusted and Refined bucket schemas.
2. Bring six tables of historical data into the Raw bucket, and populate historical data in the Trusted and Refined bucket schemas.
3. Publish Raw, Trusted, and Refined bucket data from #2 and #3 to the corresponding buckets in the CCB data lake, and develop an analytics pipeline to publish data to Snowflake.
4. Integrate TDQ/BDQ in the Glue pipeline.
5. Develop observability dashboards for these jobs.
6. Implement reliability wherever needed to prevent data loss.
7. Configure data archival policies and periodic cleanup.
8. Perform end-to-end testing of the implementation.
9. Implement all of the above in production.
10. Reconcile data across SORs, the Auth Data Lake, and the CCB Data Lake.
11. Success criteria: all 50 Kafka events are ingested into the CCB data lake, and the existing 16 Tableau dashboards are populated using this data.
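For illustration only (not part of the posting): a minimal sketch of an AWS Glue ETL job in PySpark that reads a catalog table, applies a simple filter, and writes Parquet to S3. The database, table, bucket, and column names are hypothetical placeholders, and the script assumes it runs inside a Glue job.

```python
# Minimal sketch of a Glue ETL job: read from the Glue Data Catalog, transform, write to S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw events registered in the catalog (placeholder names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Example transformation: drop obviously invalid rows using plain Spark.
df = raw.toDF().filter("event_id IS NOT NULL")

# Write the curated output back to S3 as Parquet (placeholder path).
curated = DynamicFrame.fromDF(df, glue_context, "curated")
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-trusted-bucket/events/"},
    format="parquet",
)
job.commit()
```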
Posted 2 days ago
7.0 - 12.0 years
20 - 27 Lacs
Bengaluru
Work from Office
TECHNICAL SKILLS AND EXPERIENCE
Most important:
- 7+ years of professional experience as a data engineer, with at least 4 utilizing cloud technologies.
- Proven experience building ETL or ELT data pipelines with Databricks in either Azure or AWS using PySpark.
- Strong experience with the Microsoft Azure data stack (Databricks, Data Lake Gen2, ADF, etc.).
- Strong SQL skills and proficiency in Python, adhering to standards such as PEP 8.
- Proven experience with unit testing and applying appropriate testing methodologies using libraries such as Pytest, Great Expectations, or similar.
- Demonstrable experience with CI/CD, including release and test automation tools and processes such as Azure DevOps, Terraform, PowerShell, and Bash scripting, or similar.
- Strong understanding of data modeling, data warehousing, and OLAP concepts.
- Excellent technical documentation skills.
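For illustration only (not part of the posting): a minimal sketch of unit-testing a small PySpark transformation with pytest, the kind of testing practice called out above. The transformation and column names are hypothetical, and pyspark plus pytest are assumed to be installed.

```python
# Minimal sketch: unit-testing a small PySpark transformation with pytest.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total_column(df):
    """Hypothetical transformation under test: total = quantity * unit_price."""
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))

@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    yield session
    session.stop()

def test_add_total_column(spark):
    source = spark.createDataFrame(
        [(2, 10.0), (3, 5.0)], ["quantity", "unit_price"]
    )
    result = add_total_column(source).collect()
    assert [row["total"] for row in result] == [20.0, 15.0]
```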
Posted 2 days ago
8.0 - 10.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Databricks Architect

Should have a minimum of 10+ years of experience.

Must-have skills: Databricks, Delta Lake, PySpark or Scala Spark, Unity Catalog.
Good-to-have skills: Azure and/or AWS Cloud.

Hands-on exposure in:
- Strong experience using Databricks as a lakehouse solution
- Establishing the Databricks Lakehouse architecture
- Ingesting and transforming batch and streaming data on the Databricks Lakehouse Platform
- Orchestrating diverse workloads for the full lifecycle, including Delta Live Tables, PySpark, etc.

Mandatory Skills: Databricks - Data Engineering. Experience: 8-10 Years.
Posted 2 days ago
8.0 - 12.0 years
14 - 18 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design and implement scalable data pipelines using ETL/ELT frameworks.
- Develop and maintain data models and data warehouse architecture using Snowflake.
- Build and manage DBT (Data Build Tool) models for data transformation and lineage tracking.
- Write efficient and reusable Python scripts for data ingestion, transformation, and automation.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and governance across all data platforms.
- Monitor and optimize performance of data pipelines and queries.
- Implement best practices for data engineering, including version control, testing, and CI/CD.

Required Skills and Qualifications:
- 8+ years of experience in data engineering or a related field.
- Strong expertise in Snowflake, including schema design, performance tuning, and security.
- Proficiency in Python for data manipulation and automation.
- Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.).
- Experience with DBT for data transformation and documentation.
- Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect).
- Strong SQL skills and experience with large-scale data sets.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
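For illustration only (not part of the posting): a minimal, hedged sketch of a reusable Python ingestion helper that stages a local file and loads it into Snowflake, the kind of ingestion/automation script mentioned above. The account, credentials, stage, and table names are hypothetical placeholders; snowflake-connector-python is assumed.

```python
# Minimal sketch: stage a local CSV and load it into a Snowflake table.
import snowflake.connector

def load_file_to_snowflake(local_path: str, table: str) -> None:
    conn = snowflake.connector.connect(
        account="example_account",   # placeholder credentials
        user="example_user",
        password="example_password",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # PUT uploads the file to the table stage; COPY INTO loads it into the table.
        cur.execute(f"PUT file://{local_path} @%{table} OVERWRITE = TRUE")
        cur.execute(f"COPY INTO {table} FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
        cur.close()
    finally:
        conn.close()

if __name__ == "__main__":
    load_file_to_snowflake("/tmp/orders.csv", "ORDERS")  # hypothetical file and table
```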
Posted 2 days ago
5.0 - 7.0 years
22 - 25 Lacs
Bengaluru
Work from Office
We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and big data engineering, leveraging the AWS/Databricks tech stack. He/she will work with a star team of architects, data scientists/AI specialists, data engineers, and integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain; Databricks/AWS tech stack.
- 2+ years of experience building data pipelines with Databricks/PySpark/SQL.
- Experience in writing and interpreting SQL queries, designing data models and data standards.
- Experience with SQL Server databases, Oracle, and/or cloud databases.
- Experience in data warehousing and data marts, star and snowflake models.
- Experience loading data into databases from databases and files.
- Experience analyzing and drawing design conclusions from data profiling results.
- Understanding of business processes and the relationships of systems and applications.
- Must be comfortable conversing with end-users.
- Must have the ability to manage multiple projects/clients simultaneously.
- Excellent analytical, verbal, and communication skills.

Role and Responsibilities:
- Work with business stakeholders and build data solutions to address analytical and reporting requirements.
- Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations meeting data requirements.
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow.
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation.
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases.
- Conduct root cause analysis and resolve production problems and data issues.
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings.
- Provide support for production problems and daily batch processing.
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta tables, Parquet), and views to ensure data integrity and performance.
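For illustration only (not part of the posting): a minimal sketch of a Databricks-style PySpark job that cleans raw data and writes a partitioned Delta table, reflecting the Delta Lake/PySpark responsibilities above. The paths, columns, and table name are hypothetical, and a Spark session with Delta Lake support (e.g., on Databricks) is assumed.

```python
# Minimal sketch: clean raw Parquet data and write it as a managed Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-delta-sketch").getOrCreate()

# Read raw Parquet files from the landing zone (placeholder path).
raw = spark.read.parquet("/mnt/landing/orders/")

# Simple cleansing: drop duplicates, standardize a timestamp, remove null keys.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Write the curated data as a managed Delta table, partitioned by date.
(
    clean.write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .saveAsTable("analytics.orders_curated")
)
```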
Posted 2 days ago
6.0 - 11.0 years
15 - 30 Lacs
Lucknow
Remote
Job Title: Data Engineer (DBT & Airflow)
Type: Contract (8 hrs/day)
Experience: 6+ years
Location: Remote/WFH
Duration: 3-6 months (possibility of extension)

Job Summary: We are seeking an experienced Data Engineer with strong expertise in DBT and Apache Airflow to join our team on a contract basis. The ideal candidate will have a proven track record of building scalable data pipelines, transforming raw datasets into analytics-ready models, and orchestrating workflows in a modern data stack. You will play a key role in designing, developing, and maintaining data infrastructure that supports business intelligence, analytics, and machine learning initiatives.

Key Responsibilities:
- Design, build, and maintain robust data pipelines and workflows using Apache Airflow
- Develop and manage modular, testable, and well-documented SQL models using DBT
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements
- Implement and monitor data quality checks, alerts, and lineage tracking
- Work with cloud-based data warehouses such as Snowflake, BigQuery, or Redshift
- Optimize ETL/ELT processes for performance and scalability
- Participate in code reviews, documentation, and process improvement initiatives

Required Qualifications:
- 6+ years of professional experience in data engineering or ETL development
- Strong hands-on experience with DBT (Data Build Tool) for data transformation
- Proven experience designing and managing DAGs using Apache Airflow
- Advanced proficiency in SQL and working with cloud data warehouses (Snowflake, BigQuery, Redshift, etc.)
- Solid programming skills in Python
- Experience with data modeling, data warehousing, and performance tuning
- Familiarity with version control systems (e.g., Git) and CI/CD practices
- Strong problem-solving skills and attention to detail
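For illustration only (not part of the posting): a minimal sketch of an Airflow DAG that orchestrates a dbt build followed by dbt tests, the DBT-plus-Airflow pattern this role centers on. The project directory and target name are hypothetical; Airflow 2.x and the dbt CLI are assumed to be available on the worker.

```python
# Minimal sketch: an Airflow DAG that runs dbt models, then dbt tests.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics_project"  # placeholder dbt project path

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    dbt_run >> dbt_test  # only test models after they have built successfully
```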
Posted 2 days ago
5.0 - 10.0 years
12 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Responsibilities:
- Design and implement robust and scalable data pipelines using Azure Data Factory, Azure Data Lake, and Azure SQL.
- Work extensively with Azure Fabric, Cosmos DB, and SQL Server to develop and optimize end-to-end data solutions.
- Perform database design, data modeling, and performance tuning to ensure system reliability and data integrity.
- Write and optimize complex SQL queries to support data ingestion, transformation, and reporting needs.
- Proactively implement SQL optimization and preventive maintenance strategies to ensure efficient database performance.
- Lead data migration efforts from on-premise to cloud or across Azure services.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Maintain clear documentation and follow industry best practices for security, compliance, and scalability.

Required Skills:
- Proven experience working with: Azure Fabric, SQL Server, Azure Data Factory, Azure Data Lake, Cosmos DB.
- Strong hands-on expertise in: complex SQL queries, SQL query efficiency and optimization, database design and data modeling, data migration techniques, and performance tuning.
- Solid understanding of cloud infrastructure and data integration patterns in Azure.
- Experience working in agile environments with CI/CD practices.

Nice to have: Microsoft Azure certifications related to Data Engineering or Azure Solutions.

Location: Bengaluru, Hyderabad, Chennai, Pune, Noida, Mumbai.
Posted 2 days ago
6.0 - 11.0 years
15 - 22 Lacs
Hyderabad
Work from Office
Job Opportunity: Senior Data Analyst at GSPANN

Company Overview: GSPANN, headquartered in California, U.S.A., is a global consulting and IT services firm. We specialize in helping clients optimize their IT capabilities and operations, particularly in the retail, high-tech, and manufacturing sectors. With 1900+ employees across five global delivery centers, we combine the agility of a boutique consultancy with the scale of a large IT services provider.

Role: Senior Data Analyst
Experience Required: 5+ years
Location: Pune, Hyderabad, Gurgaon
Domain Expertise: Retail (mandatory)

Key Skills & Requirements:
- Bachelor's degree in Computer Science, MIS, or a related field.
- 6-7 years of relevant experience in data analytics.
- Strong ability to translate strategic vision into actionable insights.
- Expertise in conducting data analysis, hypothesis testing, and delivering insights independently.
- Proven experience in defining KPIs for domains like Sales, Consumer Behavior, and Supply Chain.
- Exceptional SQL skills.
- Proficiency in visualization tools: Tableau, Power BI, Domo, etc.
- Familiarity with cloud platforms and big data tools: AWS, Azure, GCP, Hive, Snowflake, Presto.
- Detail-oriented with a structured problem-solving approach.
- Excellent verbal and written communication skills.
- Experience with agile development methodologies.
- Prior experience in retail or e-commerce domains is a strong plus.

How to Apply: Interested candidates can share their CVs at heena.ruchwani@gspann.com
Posted 2 days ago
4.0 - 9.0 years
7 - 16 Lacs
Kozhikode
Work from Office
Role & responsibilities:
- Put together large, intricate data sets to satisfy both functional and non-functional business needs.
- Determine, create, and implement internal process improvements, such as redesigning infrastructure for increased scalability, improving data delivery, and automating manual procedures.
- Build the necessary infrastructure using AWS and SQL technologies to enable effective data extraction, transformation, and loading from a variety of data sources.
- Reformulate existing frameworks to maximize their performance.
- Build analytical tools that make use of the data flow and offer practical insight into crucial company performance indicators, such as operational effectiveness and customer acquisition.
- Help stakeholders, including the data, design, product, and executive teams, with technical data difficulties.
- Work on data-related technical challenges while collaborating with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs.
- Remain up to date with developments in technology and industry norms to produce higher-quality results.
Posted 2 days ago
10.0 - 12.0 years
30 - 40 Lacs
Hyderabad
Work from Office
We seek a hands-on Delivery Lead with 10-12 years of experience to own end-to-end execution of AI/ML projects, mentor technical teams, and collaborate with sales to scale our consulting solutions.

Key Responsibilities:
1. Project Delivery: Own client-facing AI/ML projects from scoping to deployment, ensuring quality, timelines, and business impact. Guide teams on architecture, model design, and deployment (cloud/on-prem).
2. Team Leadership: Lead and mentor data scientists/engineers, fostering skill growth and collaboration. Drive best practices in AI/ML (LLMs, MLOps, Agile).
3. Sales Collaboration: Partner with sales to scope proposals, estimate efforts, and present technical solutions to clients.

Must-Have Skills:
- 10-12 years in AI/ML, with 5+ years leading teams in consulting/client delivery.
- Expertise in: AI/ML frameworks (TensorFlow, PyTorch, LLMs); cloud platforms (AWS/Azure/GCP) and CI/CD pipelines; Python/R and data engineering (Spark, Kafka).
- Strong communication and client/stakeholder management.

Good to Have:
- Familiarity with MLOps (Docker, Kubernetes).
- Agile/Scrum certification.

Qualifications:
- Bachelor's/Master's in CS, Data Science, or a related field.
- Certifications (e.g., AWS ML, Azure AI Engineer) are a plus.
Posted 2 days ago
0.0 - 1.0 years
0 Lacs
Mumbai Suburban, Mumbai (All Areas)
Work from Office
Role: Data Engineer
Location: Mumbai (Goregaon)
Working Days: 5 days

About us: Crimson Interactive - https://www.crimsoni.com/
We are a technology-driven scientific communications & localization company. Crimson offers a robust ecosystem of services with cutting-edge AI and learning products for researchers, publishers, societies, universities, and government research bodies worldwide. With a global presence, including 9 international offices, we cater to the communication needs of the scientific community and corporates.

Crimson Enago flagship products: At Crimson Enago we are laser-focused on building AI-powered tools and services that significantly boost the productivity of researchers and professionals. Every researcher or professional goes through the stages of knowledge discovery, knowledge acquisition, knowledge creation, and knowledge dissemination. However, each stage is cognitively heavy and tightly coupled. In this direction, we have our flagship products Trinka and Enago Read that focus on making all four stages easy and fast.

About Trinka: Trinka (www.trinka.ai) is an AI-powered English grammar checker and language enhancement writing assistant designed for academic and technical writing. Built by linguists, scientists, and language lovers, Trinka finds and corrects thousands of complex writing errors so you don't have to. Trinka corrects contextual spelling mistakes and advanced grammar errors, enhances vocabulary usage, and provides writing suggestions in real time. Trinka goes beyond grammar to help professionals and academics ensure professional, concise, and engaging writing. With subject-specific correction, Trinka understands the nuances in the expression of each subject and ensures the writing is fit for the subject. Trinka's Enterprise solutions come with unlimited access and great customization options for all of Trinka's powerful capabilities.

About Enago Read: Enago Read (www.read.enago.com) is the first smart workspace that helps researchers (students, professors, and corporate researchers) be better and faster in their research projects. Powered by proprietary AI algorithms and a unique approach to solving problems with design and tech, Enago Read is set to be the default workspace for any research-heavy project. Launched in 2019, the product connects information (research papers, blogs, wikis, books, courses, videos, etc.) to behaviors (reading, writing, annotating, discussing, and more), opening up newer insights and opportunities in the academic space that were otherwise not possible (or not imaginable).

About the team: We are a bunch of passionate researchers, engineers, and designers who came together to build a product that can revolutionize the way any research-intensive project is done. Reducing cognitive load and helping people convert information into knowledge is at the core of our mission. Our engineering team is building a scalable platform that deals with tons of data, AI processing over the data, and interactions of users from across the globe. We believe research plays a key role in making the world a better place, and we want to make it easy to approach and fun to do! We are building a world-class language-related product that has the potential to positively transform lives worldwide, with a passionate team of data scientists, coders, and linguists working on it.
Key responsibilities:
- Create, build, and design data management systems across an entire organization.
- Work with very large data sets (both structured and unstructured).
- Help data scientists easily retrieve the data needed for their evaluations and experiments.
- Design, develop, and implement R&D and pre-product prototype solutions.
- Bring strong engineering skills that help the engineering team productionize NLP/ML algorithms.
- Implement scalable, maintainable, well-documented, and high-quality solutions.
- Stay abreast of new developments in Artificial Intelligence (AI)/Machine Learning (ML).
- Contribute to the research strategy and technical culture of the team.

Skills Required:
- BTech/MTech/ME/MCA from a reputed engineering college.
- 0-1 years of industry experience.
- Knowledge of agentic frameworks.
- Exposure to prompt engineering, RAG, and vector databases.
- Extremely curious and relentless at figuring out solutions to problems.
- Knowledge of big data platforms like Hadoop and its ecosystem.
- Proficiency in programming languages like Java/C/C++/Python.
- Experience with cloud services.
- Exposure to NLP and its related services.
- Experience with one or more visualization tools like Tableau.
- Experience with Docker, Kubernetes, Kafka, Elasticsearch, Lucene.
- Experience with relational or NoSQL databases such as MySQL, MongoDB, Redis, Neo4j.
- Experience handling various data types and structures: structured and unstructured data, validating and cleaning data, and measuring evaluation.
- Excellent understanding of machine learning techniques.
Posted 2 days ago
6.0 - 10.0 years
25 - 30 Lacs
Hyderabad
Work from Office
We seek a Senior AI Scientist with strong ML fundamentals and data engineering expertise to lead the development of scalable AI/LLM solutions. You will design, fine-tune, and deploy models (e.g., LLMs, RAG architectures) while ensuring robust data pipelines and MLOps practices.

Key Responsibilities:
1. AI/LLM Development:
- Fine-tune and optimize LLMs (e.g., GPT, Llama) and traditional ML models for production.
- Implement retrieval-augmented generation (RAG), vector databases, and orchestration tools (e.g., LangChain).
2. Data Engineering:
- Build scalable data pipelines for unstructured/text data (e.g., Spark, Kafka, Airflow).
- Optimize storage/retrieval for embeddings (e.g., pgvector, Pinecone).
3. MLOps & Deployment:
- Containerize models (Docker) and deploy on cloud (AWS/Azure/GCP) using Kubernetes.
- Design CI/CD pipelines for LLM workflows (experiment tracking, monitoring).
4. Collaboration:
- Work with DevOps to optimize latency/cost trade-offs for LLM APIs.
- Mentor junior team members on ML engineering best practices.

Required Skills & Qualifications:
- Education: MS/PhD in CS/AI/Data Science (or equivalent experience).
- Experience: 6+ years in ML plus data engineering, with 2+ years in LLM/GenAI projects.
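For illustration only (not part of the posting): a minimal, library-agnostic sketch of the retrieval step in a RAG architecture, ranking stored document embeddings by cosine similarity against a query. The embed() function is a stand-in, not any specific vendor or model API.

```python
# Minimal sketch of RAG retrieval: rank document embeddings by cosine similarity.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def top_k(query: str, documents: list[str], k: int = 2) -> list[str]:
    query_vec = embed(query)
    doc_vecs = np.stack([embed(d) for d in documents])
    # Cosine similarity between the query and every document.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    best = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in best]

if __name__ == "__main__":
    docs = ["claims processing guide", "patient intake workflow", "billing codes overview"]
    # The retrieved passages would then be concatenated into the LLM prompt.
    print(top_k("how are claims processed?", docs))
```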
Posted 2 days ago
0.0 - 5.0 years
0 Lacs
Pune
Remote
The candidate must be proficient in Python and its common libraries and frameworks; good with data modeling, PySpark, MySQL concepts, Power BI, and AWS/Azure concepts; experienced in optimizing large transactional databases; and familiar with data visualization tools, Databricks, and FastAPI.
Posted 3 days ago
15.0 - 24.0 years
40 - 60 Lacs
Hyderabad, Chennai
Work from Office
Job Title: Technical Program Manager - Data Engineering & Analytics
Experience: 16-25 years (relevant years)
Salary: Based on current CTC
Location: Chennai and Hyderabad
Notice Period: Immediate joiners only

Critical Expectations:
1. Candidate should have handled a team size of at least 100 people.
2. Should have a minimum of 8 years of experience in Data and AI development.
3. Should have experience in complex data migration to the cloud.

Position Overview: We are seeking an experienced Program Manager to lead large-scale, complex Data, BI, and AI/ML initiatives. The ideal candidate will have a deep technical understanding of modern data architectures, hands-on expertise in end-to-end solution delivery, and a proven ability to manage client relationships and multi-functional teams. This role will involve driving innovation, operational excellence, and strategic growth within Data Engineering & Analytics programs.

Job Description:
- Responsible for managing large and complex programs encompassing multiple Data, BI, and AI/ML solutions.
- Lead the design, development, and implementation of Data Engineering & Analytics solutions involving Teradata, Google Cloud Data Platform (GCP), AI/ML, Qlik, Tableau, etc.
- Work closely with clients to understand their needs and translate them into technology solutions.
- Provide technical leadership to solve complex business issues that translate into data analytics solutions.
- Prepare operational/strategic reports on defined cadences and present to steering and operational committees via WSR, MSR, etc.
- Ensure compliance with defined service level agreements (SLAs) and key performance indicator (KPI) metrics.
- Track and monitor the performance of services, identify areas for improvement, and implement changes as needed.
- Continuously evaluate and improve processes to ensure that services are delivered efficiently and effectively.
- Proactively identify issues and risks and prepare appropriate mitigation/resolution plans.
- Foster a positive work environment and build a culture of automation and innovation to improve service delivery performance.
- Develop the team as a coach and mentor, supporting and managing team members.
- Create SOWs, proposals, solutions, and estimations for Data Analytics solutions.
- Contribute to building the Data Analytics and AI/ML practice by creating case studies, POCs, etc.
- Shape opportunities and create execution approaches throughout the lifecycle of client engagements.
- Collaborate with various functions/teams in the organization to support recruitment, hiring, onboarding, and other operational activities.
- Maintain positive relationships with all stakeholders and ensure proactive response to opportunities and challenges.

Must-Have Skills:
- Deep hands-on expertise in end-to-end solution lifecycle management in Data Engineering and Data Management.
- Strong technical understanding of modern data architecture and solutions.
- Ability to execute strategy for implementations through a roadmap and collaboration with different stakeholders.
- Understanding of cloud data architecture and data modeling concepts and principles, including cloud data lakes, warehouses and marts, dimensional modeling, star schemas, and real-time and batch ETL/ELT.
- Experience driving AI/ML and GenAI projects would be good to have.
- Experience with cloud-based data analytics platforms such as GCP, Snowflake, Azure, etc.
- Good understanding of SDLC and Agile methodologies.
- A telecom background would be good to have.
- Must have handled a team size of 50+.

Qualifications:
- 15-20 years of experience primarily working on Data Warehousing, BI & Analytics, and Data Management projects in tech architect, delivery, client relationship, and practice roles, involving ETL, reporting, big data, and analytics.
- Experience architecting, designing, and developing Data Engineering, Business Intelligence, and reporting projects.
- Experience working with data management solutions such as Data Quality, Metadata, Master Data, and Governance.
- Strong experience in cloud data migration programs.
- Focused on value, innovation, and automation-led account mining.
- Strong interpersonal, stakeholder management, and team building skills.
Posted 3 days ago
5.0 - 8.0 years
15 - 22 Lacs
Ahmedabad
Work from Office
- Strong proficiency in SQL; database experience, Snowflake preferred
- Expertise with Python, especially pandas, is a must
- Experience with Tableau and similar BI tools (Power BI, etc.) is a must

Required Candidate Profile:
- Must have 4+ years' experience with Tableau, SQL, AWS, and Python.
- Must be from Ahmedabad or open to relocating to Ahmedabad.
- Experience with data modelling.
- Experience in AWS environments.
Posted 4 days ago
2.0 - 6.0 years
6 - 9 Lacs
Mumbai, Mumbai Suburban, Mumbai (All Areas)
Work from Office
1. We are seeking a skilled SQL + Python Developer with a minimum of 3 years of experience to join our dynamic team.
2. This role involves a mix of database development, administration, and data engineering tasks.
3. Design and implement ETL processes for data integration.

Required Candidate Profile:
1. The ideal candidate will have a strong background in SQL, PL/SQL, and Python scripting.
2. Proven expertise in SQL query tuning, database performance optimization, and Snowflake Data Warehouse.

Perks and benefits: To be disclosed post-interview.
Posted 4 days ago