Aventior Digital is a digital transformation agency specializing in cutting-edge software solutions, digital marketing, and strategic consulting to enhance business efficiency and customer engagement.
Pune
INR 20.0 - 30.0 Lacs P.A.
Remote
Full Time
Designation: Senior Data Engineer
Preferred Experience: 7+ years

Responsibilities:
- Design, develop, and maintain data pipelines for ingesting, processing, and transforming data from various sources into actionable insights.
- Integrate data from disparate sources (databases, APIs, and files) into a unified data platform using ETL processes and data integration techniques.
- Design and implement data models, schemas, and data structures to support analytical queries, reporting, and business intelligence needs.
- Optimize database performance, query execution, and data processing workflows for efficiency, scalability, and reliability.
- Ensure data quality, integrity, and consistency through data validation, cleansing, deduplication, and error-handling mechanisms.
- Architect and implement data solutions on Azure cloud platforms, leveraging Azure services for data storage, processing, and analytics.
- Implement data security measures, encryption techniques, and access controls to protect sensitive data and ensure compliance with regulations (e.g., GDPR, HIPAA).
- Work closely with cross-functional teams, data scientists, analysts, and stakeholders to understand data requirements, provide data solutions, and communicate insights effectively.
- Document data processes, workflows, and best practices, and promote data governance standards, data lineage, and metadata management.
- Stay updated with emerging technologies, industry trends, and best practices in data engineering, cloud computing, and data analytics to drive innovation and continuous improvement.

Required Skills:
- Proficiency in writing complex SQL queries, stored procedures, and functions for data extraction, transformation, and analysis.
- Experience in database design, optimization, and management using SQL Server/Azure SQL Database.
- Knowledge of data modeling techniques, including entity-relationship diagrams, dimensional modeling, and data normalization, to design efficient data structures.
- Familiarity with ETL (Extract, Transform, Load) processes and tools such as Azure Data Factory, SSIS (SQL Server Integration Services), or other data integration platforms.
- Hands-on experience with Azure cloud services, including Azure SQL Database, Azure Data Factory, and Azure Storage.
- Experience handling large-scale data processing and analytics, with strong analytical skills and attention to detail.
- Strong statistical knowledge.
- Excellent communication skills.

Good-to-Have Skills:
- Experience working with C# and the .NET Framework.
- Experience with other Azure cloud services and SharePoint.
- Experience with Python as a programming language.
- Experience manipulating unstructured data with regular expressions.
- Basic understanding of machine learning concepts and algorithms for data mining, predictive modeling, and statistical analysis.
- Knowledge of data warehousing concepts, methodologies, and tools for building and maintaining data warehouses or data marts.
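To illustrate the kind of SQL-based validation and deduplication work this role involves, here is a minimal sketch using Python's built-in sqlite3 module; the staging table and column names are hypothetical placeholders, not part of any actual Aventior pipeline:

```python
import sqlite3

# In-memory database standing in for a staging table (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_orders (order_id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO staging_orders VALUES (?, ?, ?)",
    [(1, "acme", 100.0), (1, "acme", 100.0), (2, "globex", None), (3, "initech", 50.0)],
)

# Validation: count rows with missing amounts before they reach the warehouse.
invalid = conn.execute(
    "SELECT COUNT(*) FROM staging_orders WHERE amount IS NULL"
).fetchone()[0]

# Deduplication: keep one row per order_id, using the lowest ROWID as the survivor.
conn.execute(
    """DELETE FROM staging_orders
       WHERE rowid NOT IN (SELECT MIN(rowid) FROM staging_orders GROUP BY order_id)"""
)
deduped = conn.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]

print(invalid)  # rows failing validation: 1
print(deduped)  # rows remaining after deduplication: 3
```

In a production Azure SQL Database pipeline the same pattern would typically use window functions (ROW_NUMBER) rather than ROWID, but the validate-then-deduplicate flow is the same.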
Pune
INR 5.0 - 10.0 Lacs P.A.
Hybrid
Full Time
Job Title: GenAI Engineer
Category: Software Development
Required Experience: 3-4 years
Location: Pune

Job Description:
As the GenAI Engineer, you will play a pivotal role in driving data-driven decision-making and advancing our organization's AI and analytical capabilities. You will lead a team of data scientists, collaborate with cross-functional teams, and contribute to the development and implementation of AI and advanced analytics solutions. This position requires a strong combination of technical expertise, leadership skills, and business acumen.

Responsibilities:

Team Leadership:
- Lead, mentor, and inspire a team of junior data scientists, fostering a collaborative and innovative work environment.
- Provide technical guidance, set priorities, and ensure the team's alignment with organizational goals.
- Conduct regular performance assessments and contribute to professional development plans.

Strategy and Planning:
- Collaborate with stakeholders to understand business objectives and identify opportunities for leveraging data to achieve strategic goals.
- Develop and execute a data science roadmap, ensuring alignment with overall business and technology strategies.
- Stay abreast of industry trends, emerging technologies, and best practices in data science.

Advanced Analytics:
- Design, develop, and implement advanced machine learning models and algorithms to extract insights and solve complex business problems.
- Drive the exploration and application of new data sources, tools, and techniques to enhance analytical capabilities.
- Collaborate with data engineers to ensure the scalability and efficiency of deployed models.

Cross-functional Collaboration:
- Collaborate with cross-functional teams, including business analysts, software engineers, and domain experts, to integrate data science solutions into business processes.
- Communicate complex analytical findings to non-technical stakeholders in a clear and actionable manner.
Data Governance and Quality:
- Establish and enforce data governance standards to ensure the accuracy, reliability, and security of data used for analysis.
- Work with data engineering teams to enhance data quality and integrity throughout the data lifecycle.

Project Management:
- Oversee the end-to-end execution of data science projects, ensuring timelines, budgets, and deliverables are met.
- Provide regular project updates to stakeholders and manage expectations effectively.

Technical Expertise:
- Provide technical guidance and execution for the latest GenAI technologies, including but not limited to LLM/SLM/VLM and multi-modal AI algorithms.
- Leverage Transformers for complex natural language processing tasks.
- Apply hands-on experience with RAG (Retrieval-Augmented Generation) pipelines using Pinecone or similar vector databases to enhance LLM performance.
- Build RESTful APIs and ML endpoints using FastAPI, integrating seamlessly with production systems.
- Apply LangChain and agentic frameworks to integrate LLMs with tools, memory, and reasoning chains.
- Design LLM-driven workflows, including prompt engineering, chain-of-thought reasoning, and OpenAI API call optimization to reduce latency and cost.
- Develop scalable event-driven architectures using AWS EventBridge, RDS, and other AWS services (e.g., S3, Glue, SageMaker).
- Develop and optimize embedding generation, storage, and retrieval processes using tools like OpenAI Embeddings, LangChain, and Pinecone.
- Lead the development of deep learning technologies such as computer vision for image processing, OCR/IDP, object detection and tracking, segmentation, image generation, Convolutional Neural Networks, Capsule Networks, etc.
- Develop core machine learning algorithms such as time series analysis with Neural ODEs and Variational Autoencoders for image generation and anomaly detection; provide oversight for core deep learning algorithms such as Neural Architecture Search for optimization and Graph Neural Networks for molecular structures.

Qualifications:
- Minimum 1+ years of experience leading a team of junior data scientists, with a proven track record of successful project implementations.
- Master's or Ph.D. in a quantitative field (Computer Science, Statistics, Mathematics, etc.).
- Experience with GenAI, Agentic AI, LLM training, and LLM-driven workflow development. Knowledge of large multi-modal models is a must.
- Experience in MLOps, Scientific Machine Learning, Statistical Modeling, and Data Visualization.
- Proficiency in cloud platforms, particularly AWS (SageMaker, S3, RDS, EventBridge, Glue, Redshift).
- Must have experience with the development and implementation of the core machine learning algorithms mentioned above.
- Bonus: prior experience in LLM observability, latency tuning, token usage monitoring, and fine-grained control of OpenAI or similar API integrations.
- Must have hands-on experience with deep learning technologies for computer vision and image processing, as well as core neural network applications like optimization.
- Experience in developing ML, AI, and Data Science solutions and putting them into production, with proficiency in data engineering, is desirable.
- Experience in the development and implementation of scalable and efficient data pipelines using AWS services such as SageMaker, S3, Glue, and/or Redshift.
- Excellent leadership, communication, and interpersonal skills.
- Experience with big data technologies and cloud platforms is a plus.

Company Overview:
Aventior is a leading provider of innovative technology solutions for businesses across a wide range of industries.
At Aventior, we leverage cutting-edge technologies like AI, MLOps, DevOps, and many more to help our clients solve complex business problems and drive growth. We also provide a full range of data development and management services, including Cloud Data Architecture, Universal Data Models, Data Transformation & ETL, Data Lakes, User Management, Analytics and Visualization, and automated data capture (for scanned documents and unstructured/semi-structured data sources). Our team of experienced professionals combines deep industry knowledge with expertise in the latest technologies to deliver customized solutions that meet the unique needs of each of our clients. Whether you are looking to streamline your operations, enhance your customer experience, or improve your decision-making process, Aventior has the skills and resources to help you achieve your goals. We bring a well-rounded, cross-industry, and multi-client perspective to our client engagements. Our strategy is grounded in design, implementation, innovation, migration, and support. We have a global delivery model, a multi-country presence, and a team well-equipped with professionals and experts in the field.
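The embedding retrieval step at the heart of the RAG pipelines this posting describes can be sketched in plain Python. A real system would generate embeddings with a model (e.g., OpenAI Embeddings) and query a vector store such as Pinecone; the toy three-dimensional vectors and document ids below are placeholders for illustration only:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, documents, k=2):
    """Return ids of the k documents whose embeddings are closest to the query."""
    ranked = sorted(documents, key=lambda d: cosine(query, d["embedding"]), reverse=True)
    return [d["id"] for d in ranked[:k]]

# Toy embeddings standing in for model output.
docs = [
    {"id": "doc-a", "embedding": [1.0, 0.0, 0.0]},
    {"id": "doc-b", "embedding": [0.9, 0.1, 0.0]},
    {"id": "doc-c", "embedding": [0.0, 1.0, 0.0]},
]
print(top_k([1.0, 0.0, 0.0], docs))  # → ['doc-a', 'doc-b']
```

The retrieved passages would then be concatenated into the LLM prompt; vector databases perform the same ranking with approximate nearest-neighbor indexes rather than a full sort.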
Pune
INR 15.0 - 20.0 Lacs P.A.
Remote
Full Time
Designation: .NET Engineer
Preferred Experience: 7+ years

Responsibilities:
- Engage directly with Azure developers and IT professionals in forums to answer technical questions and help solve technical problems.
- Solve highly complex problems involving broad, in-depth product knowledge or in-depth product specialty; this may include support of additional product lines.
- Develop code samples, quick-start guides, and how-to guides to help customers understand complex cloud scenarios.
- Collaborate with internal teams to produce software design and architecture.
- Write clean, scalable code using .NET programming languages.
- Test and deploy applications and systems.
- Revise, update, refactor, and debug code.
- Improve existing software.
- Develop documentation throughout the software development life cycle (SDLC).
- Serve as an expert on applications and provide technical support.

Requirements and Skills:
- Experience as an ASP.NET Core developer, along with skills in JavaScript, CSS, and Razor pages.
- Experience with programming and scripting languages/frameworks such as .NET Core, ASP.NET, and the .NET Framework.
- Experience with building data pipelines (ETL).
- Expertise in SQL queries.
- Experience working with .NET, WinForms, and WPF.
- Experience with claims-based authentication (SAML/OAuth/OIDC), MFA, and RBAC.

Good to Have:
- Knowledge of working with Entity Framework.
- Experience working with and deploying Azure web services.
- Experience with machine learning is a plus.
- Exposure to SSIS is a plus.
Pune
INR 12.0 - 15.0 Lacs P.A.
Remote
Full Time
Job Title: Java Developer
Required Experience: 7+ years

Job Overview:
We are looking for a passionate Java developer with 7+ years of experience to join our dynamic team. The ideal candidate should have a solid understanding of Java programming, experience with web frameworks, and a strong desire to develop efficient, scalable, and maintainable applications.

Key Responsibilities:
- Design, develop, and maintain scalable and high-performance Java applications.
- Write clean, modular, and well-documented code that follows industry best practices.
- Collaborate with cross-functional teams to define, design, and implement new features.
- Debug, test, and troubleshoot applications across various platforms and environments.
- Participate in code reviews and contribute to the continuous improvement of development processes.
- Work with databases such as MySQL and PostgreSQL to manage application data.
- Implement and maintain RESTful APIs for communication between services and front-end applications.
- Assist in optimizing application performance and scalability.
- Stay updated with emerging technologies and apply them in development projects when appropriate.

Requirements:
- 7+ years of experience in Java development.
- Strong knowledge of Core Java, OOP concepts, Struts, and Java SE/EE.
- Experience with the Spring Framework (Spring Boot, Spring MVC) or Hibernate for developing web applications.
- Familiarity with RESTful APIs and web services.
- Proficiency in working with relational databases like MySQL or PostgreSQL.
- Familiarity with JavaScript, HTML5, and CSS3 for front-end integration.
- Basic knowledge of version control systems like Git.
- Experience with Agile/Scrum development methodologies.
- Understanding of unit testing frameworks such as JUnit or TestNG.
- Strong problem-solving and analytical skills.
- Experience with Kafka.
- Experience with GCP.

Preferred Skills:
- Experience with DevOps tools like Docker, Kubernetes, or CI/CD pipelines.
- Familiarity with microservice architecture and containerization.
- Experience with NoSQL databases like MongoDB is a plus.
Pune
INR 8.0 - 12.0 Lacs P.A.
Remote
Full Time
Job Title: R Shiny Engineer/Data Scientist
Required Experience: 3-4 years

Job Description:
As the Data Scientist, you will play a pivotal role in driving data-driven decision-making and advancing our organization's AI and analytical capabilities. You will lead a team of data scientists, collaborate with cross-functional teams, and contribute to the development and implementation of AI and advanced analytics solutions. This position requires a strong combination of technical expertise, leadership skills, and business acumen.

Responsibilities:

Team Leadership:
- Lead, mentor, and inspire a team of junior data scientists, fostering a collaborative and innovative work environment.
- Provide technical guidance, set priorities, and ensure the team's alignment with organizational goals.
- Conduct regular performance assessments and contribute to professional development plans.

Strategy and Planning:
- Collaborate with stakeholders to understand business objectives and identify opportunities for leveraging data to achieve strategic goals.
- Develop and execute a data science roadmap, ensuring alignment with overall business and technology strategies.
- Stay abreast of industry trends, emerging technologies, and best practices in data science.

Advanced Analytics and Statistical Modeling:
- Design, develop, and implement advanced machine learning models and statistical algorithms to extract insights and solve complex business problems.
- Apply robust statistical process control (SPC), univariate and multivariate analysis, and both parametric and non-parametric statistical techniques.
- Conduct hypothesis testing, PCA, the Shapiro-Wilk test, the Anderson-Darling test, Box-Cox transformations, and other statistical methods to ensure data quality and model validity.
- Work extensively with batch genealogy data and large manufacturing datasets to uncover patterns and optimize operational efficiency.
- Ensure strong statistical analysis support for both normal and non-normal distributions.

R Shiny Application Development:
- Develop and maintain robust, interactive R Shiny applications to support dynamic data exploration and decision-making platforms.
- Build scalable and user-driven front-end interfaces for real-time statistical analysis and visualization.
- Collaborate with backend engineers to integrate R Shiny platforms with Redshift and other data sources for seamless analytics delivery.

Cross-functional Collaboration:
- Collaborate with cross-functional teams, including business analysts, software engineers, and domain experts, to integrate data science solutions into business processes.
- Communicate complex analytical findings to non-technical stakeholders in a clear and actionable manner.

Data Governance and Quality:
- Establish and enforce data governance standards to ensure the accuracy, reliability, and security of data used for analysis.
- Work with data engineering teams to enhance data quality and integrity throughout the data lifecycle.

Project Management:
- Oversee the end-to-end execution of data science projects, ensuring timelines, budgets, and deliverables are met.
- Provide regular project updates to stakeholders and manage expectations effectively.

Technical Expertise:
- Provide technical guidance and execution for the latest GenAI technologies, including but not limited to LLM/SLM/VLM and multi-modal AI algorithms.
- Leverage Transformers for complex natural language processing tasks.
- Lead the development of deep learning technologies such as computer vision for image processing, OCR/IDP, object detection and tracking, segmentation, image generation, Convolutional Neural Networks, Capsule Networks, etc.
- Develop core machine learning algorithms such as time series analysis with Neural ODEs and Variational Autoencoders for image generation and anomaly detection; provide oversight for core deep learning algorithms such as Neural Architecture Search for optimization and Graph Neural Networks for molecular structures.

Qualifications:
- Master's or Ph.D. in a quantitative field (Computer Science, Statistics, Mathematics, etc.).
- Minimum 1.5+ years of experience leading a team of junior data scientists, with a proven track record of successful project implementations.
- Proven experience in developing and deploying R Shiny applications for real-time analytics and statistical platforms.
- In-depth experience with SPC, hypothesis testing, PCA, the Shapiro-Wilk test, the Anderson-Darling test, Box-Cox transformations, and batch genealogy analysis.
- Experience in developing statistical solutions for both normal and non-normal distributions, and in applying both univariate and multivariate techniques.
- Experience with GenAI, Agentic AI, LLM training, and LLM-driven workflow development. Knowledge of large multi-modal models is a must.
- Experience in MLOps, Statistical Modeling, and Data Visualization.
- Must have experience with the development and implementation of the core machine learning algorithms mentioned above.
- Must have hands-on experience with deep learning technologies for computer vision and image processing, as well as core neural network applications like optimization.
- Experience in developing ML, AI, and Data Science solutions and putting them into production, with proficiency in data engineering, is desirable.
- Experience in the development and implementation of scalable and efficient data pipelines using AWS services such as SageMaker, S3, Glue, and/or Redshift.
- Excellent leadership, communication, and interpersonal skills.
- Experience with big data technologies and cloud platforms is a plus.

Industry: IT Services and IT Consulting
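The statistical process control work this posting describes would normally be implemented in R behind the Shiny UI, but the core arithmetic of a Shewhart-style control chart is language-agnostic and can be sketched with Python's standard library; the baseline and measurement values below are hypothetical:

```python
import statistics

def control_limits(samples, sigma_multiplier=3):
    """Shewhart-style limits: process mean +/- k sample standard deviations."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - sigma_multiplier * sd, mean, mean + sigma_multiplier * sd

# Phase I: establish limits from an in-control baseline run (hypothetical data).
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
lcl, center, ucl = control_limits(baseline)

# Phase II: flag new measurements that fall outside the limits.
new_points = [10.0, 9.7, 12.5]
flags = [x for x in new_points if x < lcl or x > ucl]
print(flags)  # the 12.5 measurement is out of control
```

Splitting the data into a baseline (phase I) and monitoring (phase II) stage matters: computing limits from a sample that contains the outlier would inflate the standard deviation and mask the signal.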
Pune
INR 7.0 - 10.0 Lacs P.A.
Hybrid
Full Time
Job Title: Senior AI Engineer (Clinical Research)
Required Experience: 3+ years
Location: Pune

Job Description:
We are seeking a highly skilled and experienced Senior AI Engineer to lead the development and deployment of advanced artificial intelligence and machine learning solutions for clinical research initiatives. This role will drive innovation in clinical data extraction, patient cohort analysis, protocol optimization, and predictive modeling to support better decision-making and operational efficiencies in clinical trials. This position requires a strong combination of technical expertise, leadership skills, and business acumen.

Key Responsibilities:

1. AI/ML Development & Deployment
- Develop and deploy machine learning models for extracting, classifying, and analyzing structured and unstructured clinical data (e.g., EMRs, CRFs, protocols).
- Apply NLP techniques for entity recognition, relationship extraction, summarization, and semantic search of clinical trial texts.
- Build predictive models for trial success, patient recruitment, dropout risk, and adverse event detection.
- Lead the development of GenAI workflows, including LLM/SLM/VLM-based pipelines and multi-modal AI applications.
- Implement computer vision and OCR/IDP models for clinical document processing and imaging.

2. RAG (Retrieval-Augmented Generation) & LLM Engineering
- Design and optimize RAG workflows, including embedding and retrieval pipelines.
- Work with Gemini Flash 2.5 or similar models (e.g., GPT-4, Claude) for LLM-based clinical data tasks.
- Fine-tune prompt engineering, context window optimization, and chunking strategies.
- Integrate and manage vector stores such as FAISS, Chroma, or Vertex AI Matching Engine.

3. Data Engineering & Pipelines
- Collaborate with data engineers to build scalable pipelines for ingesting and transforming clinical and biomedical data (e.g., PubMed, CT.gov, EHR, imaging).
- Handle data cleansing, harmonization, annotation, and transformation for ML readiness.
- Develop ingestion processes for structured and unstructured sources: PDF, SCORM, HTML, and internal documents.

4. Evaluation, Monitoring & Regulatory Compliance
- Evaluate models using domain-specific metrics (e.g., precision@k, response confidence) and build continuous feedback loops.
- Ensure transparency, explainability, and reproducibility in model design.
- Maintain compliance with HIPAA, GDPR, and GxP/ISO 13485 standards.

5. Leadership & Collaboration
- Lead and mentor a team of junior data scientists and ML engineers.
- Define and drive AI strategy aligned with clinical and regulatory objectives.
- Collaborate with clinical scientists, statisticians, product managers, and regulatory teams to translate clinical goals into technical AI solutions.

6. Project & Technical Ownership
- Own end-to-end execution of AI projects: scoping, design, development, deployment, and monitoring.
- Provide technical direction in deep learning (CNNs, Capsule Networks, Variational Autoencoders, Graph Neural Networks).
- Ensure MLOps best practices, including CI/CD, monitoring, logging, and governance in cloud environments.

Technical Skills & Experience:

Must-Have:
- Proficiency in Python (FastAPI/Flask), LangChain, HuggingFace, and vector database integrations.
- Experience with embedding techniques and vector similarity (cosine, dot product, top-k retrieval).
- Hands-on experience with GenAI models (e.g., Gemini Flash 2.5, GPT-4, Claude).
- Knowledge of biomedical ontologies (UMLS, SNOMED CT, MeSH) and EDC systems.
- Familiarity with CDISC, HL7 FHIR, and OMOP CDM standards for clinical data.
- Proven experience in deploying AI models on cloud platforms: AWS SageMaker, GCP Vertex AI, or Azure ML.

Desirable:
- Exposure to MLOps, scientific ML, and data visualization tools.
- Experience with Google Cloud (AlloyDB, Vertex AI APIs, secure deployments).
- Working knowledge of LMS content ingestion (PDFs, SCORM, HTML).
- Familiarity with frameworks like LlamaIndex or Haystack.
Qualifications:
- Bachelor's or Master's in Computer Science, Biomedical Engineering, Data Science, or a related field; PhD preferred.
- 3+ years of AI/ML experience, with 1+ years in clinical research or healthcare.
- Strong leadership, communication, and stakeholder management skills.
- Demonstrated success in translating complex clinical problems into deployable AI solutions.
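One of the domain-specific retrieval metrics this posting mentions, precision@k, measures what fraction of a model's top-k retrieved items are actually relevant. A minimal sketch, with hypothetical document ids standing in for a gold-standard relevance set:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that appear in the relevant set."""
    top = retrieved[:k]
    if not top:
        return 0.0
    hits = sum(1 for item in top if item in relevant)
    return hits / len(top)

# Hypothetical ids: the model's ranked output vs. the annotated relevant set.
retrieved = ["d1", "d7", "d3", "d9", "d2"]
relevant = {"d1", "d2", "d3"}
print(precision_at_k(retrieved, relevant, 3))  # 2 of the top 3 are relevant
```

In a clinical RAG evaluation loop this would be computed per query over an annotated test set, alongside recall@k and response-confidence checks, to feed the continuous feedback loops described above.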
Pune
INR 7.0 - 10.0 Lacs P.A.
Remote
Full Time
Designation: WordPress Developer
Preferred Experience: 4+ years

Position Summary:
We're managing a Composer-based WordPress CMS (inspired by Bedrock) that powers a headless architecture. The system uses modern PHP practices, WPGraphQL, Timber, and custom plugin/theme management via Composer. You will work with an experienced team maintaining and extending a multilingual, API-driven backend for a decoupled frontend application.

Responsibilities:
- Maintain and extend a Composer-managed WordPress backend.
- Work with WPGraphQL to expose content and structure to a headless frontend.
- Manage custom themes and plugins using Composer, including premium packages.
- Ensure multilingual support via Polylang and WPGraphQL Polylang extensions.
- Implement and troubleshoot Advanced Custom Fields (ACF Pro) and GraphQL integration.
- Develop and optimize backend APIs used by frontend teams.
- Collaborate with DevOps or use WP-CLI and other CLI tools for deployments.
- Write clean, modular, and testable PHP code; write or maintain unit tests with PHPUnit.

Must-Have Skills:
- Strong experience with Composer-based WordPress workflows.
- Deep understanding of WordPress as a headless CMS.
- Solid PHP (8.x) experience and object-oriented programming.
- Experience with WPGraphQL, ACF Pro, and Timber.
- Knowledge of Polylang and multilingual WordPress setups.
- Familiarity with CLI tools: WP-CLI, Composer, Git.
- Experience managing plugins/themes as Composer packages.
- Experience integrating third-party APIs (e.g., Guzzle, HttpFoundation).

Nice-to-Have Skills:
- Familiarity with the Roots stack (Bedrock, Sage).
- Basic knowledge of frontend frameworks (React, Next.js) or RESTful APIs.
- Docker or local dev tools like LocalWP or XAMPP.
- Understanding of CI/CD and deployment workflows.
- Knowledge of best practices in caching, performance, and security.