
1203 Normalization Jobs - Page 19

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

10.0 years

0 Lacs

India

On-site

About us
Founded in 2008, CitNOW is an innovative, enterprise-level software product suite that allows automotive dealerships globally to sell more vehicles and parts more profitably. CitNOW's app-based platform provides a secure, brand-compliant solution for dealers to build trust, transparency and long-lasting relationships. CitNOW Group was formed in 2021 to unite a portfolio of 12 global software companies leveraging innovation to aid retailers and manufacturers in delivering an outstanding customer experience. We have over 300 employees worldwide who all contribute to our vision to provide market-leading automotive solutions that drive efficiencies, seamlessly transforming every customer moment. The CitNOW Group is no ordinary technology company; we live a series of One Team values, and this guiding principle forms the foundation of CitNOW Group's award-winning, collaborative and inclusive culture. Recently recognised among the Top 25 Best Mid Sized Companies to work for in the UK, we pride ourselves on being a great place to work.

About the role
We are seeking a highly experienced Senior Database Administrator to own the performance, availability and security of our growing fleet of databases across MSSQL, PostgreSQL and AWS-managed services. This individual will play a key role in designing scalable data architectures, ensuring high availability and supporting development teams with optimised queries and resilient data pipelines.

Key responsibilities:

Database Administration & Operations
- Maintain, monitor, and tune production and staging MSSQL and PostgreSQL databases
- Manage high availability, backups, restores, replication, and disaster recovery strategies
- Ensure uptime and performance SLAs are met across cloud and hybrid environments

AWS Cloud Expertise
- Administer RDS, Aurora, and EC2-hosted database instances
- Monitor database performance using CloudWatch, Performance Insights, and third-party tools

Data Architecture & Design
- Work with engineering teams to model new schemas, optimize indexes, and review queries
- Implement and enforce best practices in database normalization, partitioning, and data lifecycle management

Security & Compliance
- Ensure data encryption, access controls, and audit logging are in place and compliant with company policies
- Support GDPR/SOC 2/ISO 27001 initiatives with appropriate database controls and evidence collection

Collaboration & Mentoring
- Provide database guidance to developers and DevOps teams during code reviews and deployments
- Share knowledge, mentor junior DBAs, and improve documentation and internal tooling

Required skills & experience
- 10+ years of DBA experience, including at least 5+ years with MSSQL Server (SQL Agent, SSIS, performance tuning) and 5+ years with PostgreSQL (query optimization, extensions, logical replication)
- Strong hands-on experience with AWS database services: RDS, Aurora, S3, and IAM integration
- Solid understanding of SQL query optimization, execution plans, and troubleshooting slow queries (see the sketch below)
- Proven track record of managing production-grade environments with strict uptime SLAs
- Familiarity with infrastructure-as-code and automation using tools like Terraform or Ansible

Nice-to-have qualifications:
- Experience with NoSQL databases (e.g., DynamoDB, Redis, Mongo)
- Exposure to CI/CD pipelines with database change management tools (Flyway, Liquibase)
- Knowledge of containerized database deployments (e.g., PostgreSQL on Kubernetes)
- Certification in AWS (e.g., AWS Certified Database – Specialty)

In addition to a competitive salary, our benefits package is second to none. Employee wellbeing is at the heart of our people strategy, with a number of innovative wellness initiatives such as flexi-time, where employees can vary their start and finish times within our core business hours and/or extend their lunch break by up to 2 hours per day. Employees also benefit from an additional two half days of paid leave per year to focus on their personal wellbeing. We recognise that the development of our people is vital to the ongoing success of the business and proudly promote a culture of continuous learning and improvement, along with opportunities to develop and progress a successful career with us. The CitNOW Group is an equal opportunities employer that celebrates diversity across our international teams. We are passionate about creating an inclusive workplace where everyone's individuality is valued. View our candidate privacy policy here: CitNOW-Group-Candidate-Privacy-Policy.pdf (citnowgroup.com)
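Since the role centers on execution plans and slow-query troubleshooting, here is a minimal sketch of that workflow in Python, assuming psycopg2, a reachable PostgreSQL instance, and the pg_stat_statements extension; the DSN and the orders table are placeholders, not anything from the posting.

```python
# Minimal sketch: inspect a slow query's plan and the top offenders recorded
# by pg_stat_statements. The DSN and the orders table are placeholders.
import psycopg2

DSN = "host=localhost dbname=appdb user=dba"  # placeholder connection string

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # EXPLAIN ANALYZE executes the query and reports the actual plan and timings.
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)

        # Surface the most expensive statements. Column name per PostgreSQL 13+;
        # older versions call it mean_time instead of mean_exec_time.
        cur.execute(
            """
            SELECT query, calls, mean_exec_time
            FROM pg_stat_statements
            ORDER BY mean_exec_time DESC
            LIMIT 5
            """
        )
        for stmt, calls, mean_ms in cur.fetchall():
            print(f"{mean_ms:8.1f} ms avg  x{calls}  {stmt[:60]}")
```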

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Data Analytics
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Coordinate with stakeholders to gather requirements
- Ensure timely delivery of projects

Professional & Technical Skills:
- Must-have skills: proficiency in Data Analytics
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity (see the sketch below)

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Analytics
- This position is based at our Gurugram office
- A 15 years full-time education is required
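To make the data munging bullet concrete, here is a minimal sketch assuming pandas and scikit-learn are available; the frame and its columns are hypothetical.

```python
# Minimal data-munging sketch: clean, transform, and normalize a small frame.
# Assumes pandas and scikit-learn are installed; the columns are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler

raw = pd.DataFrame({
    "age": [25, None, 47, 38],
    "income": [52_000, 61_000, None, 88_000],
    "segment": ["a", "b", "b", None],
})

# Cleaning: impute numeric gaps with the median, drop rows missing the label.
clean = raw.assign(
    age=raw["age"].fillna(raw["age"].median()),
    income=raw["income"].fillna(raw["income"].median()),
).dropna(subset=["segment"])

# Transformation: one-hot encode the categorical column.
encoded = pd.get_dummies(clean, columns=["segment"])

# Normalization: scale numeric features to zero mean and unit variance.
encoded[["age", "income"]] = StandardScaler().fit_transform(
    encoded[["age", "income"]]
)
print(encoded)
```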

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

Company Description
SoftSensor.ai LLC is a data-and-AI innovation studio twice recognized in the DataIQ 100 for Most Innovative Solutions and Best Team to Work With (Vendor Side). Alongside bespoke AI consulting, we operate PRR.AI, our flagship Medical AI platform that combines multimodal AI models with a human-in-the-loop workflow for high-volume medical and life-science data. PRR.AI powers:
- Digital Pathology & Radiology – whole-slide or DICOM image ingestion, smart region-of-interest detection, stain normalization, and AI-assisted tumor grading.
- Clinical Document Validation – OCR + LLM pipelines for batch-manufacturing records (BMRs), clinical notes, and regulatory dossiers.
- Multimodal Research Datasets – embryo-quality prediction, IVF outcome modeling, and vision-language damage detection for medical devices & vehicles.
SoftSensor + PRR.AI give clinicians, data scientists, and regulators a shared workspace where AI predictions are auditable, explainable, and continuously improved. Our culture blends clinical depth, data-science rigor, and product craftsmanship, with flexible remote work hubs in Bengaluru, Hyderabad, and Gurgaon.

Role Overview
You will be a clinical domain expert inside our AI product pipeline, splitting time between AI consulting projects and PRR.AI platform operations.

Key Responsibilities
- Clinical Data Curation: Use the PRR.AI interface to label or validate entities across slides, radiographs, lab reports, and BMR pages. Define new ontologies and QA rules that drive PRR.AI's auto-validation engine.
- Model Evaluation & Feedback Loops: Run test sets through vision-language models (e.g., LLaMA Vision, GPT-4o) and document false positives/negatives inside PRR.AI's error-tracking boards (see the sketch below). Recommend prompt tweaks and fine-tuning strategies to ML engineers.
- Knowledge Engineering & Prompt Design: Translate clinical SOPs and regulatory guidelines into structured prompts or rule-based checks used by PRR.AI's "Guardrails" module to flag contraindications or process deviations.
- Stakeholder Collaboration: Act as clinical liaison for pharma clients adopting PRR.AI in GxP settings; conduct onboarding webinars and draft medical validation reports. Partner with product & UX teams to refine the PRR.AI reviewer workflow (hot-keys, bulk actions, keyboard navigation) and minimize cognitive load.
- Regulatory & Quality Assurance: Maintain audit-ready documentation (dataset lineage, annotation protocols, validation metrics) to support ISO 13485 / GAMP 5 / CDSCO-NDCT & FDA 21 CFR Part 11 compliance.

Minimum Qualifications
- MBBS from an NMC-recognized institution; MBBS + MBA is a bonus.
- 0–3 years of internship, clinical practice, or health-informatics experience.
- Strong grasp of medical terminology, ICD-10/11, drug classes, and basic biostatistics.
- Clear English communication and the ability to explain clinical nuances to non-medical peers.
- Awareness of HIPAA, GDPR, DPDP India, and data-integrity principles (ALCOA+).

What We Offer
- Competitive pay (fixed salary for FTE; hourly rates for part-time) plus variable comp
- Group medical insurance
- Sponsored upskilling: Coursera "AI in Healthcare", Azure Health Data Services, etc.
- Direct impact on products used by pathologists, IVF specialists, and pharma QA teams worldwide.

How to Apply
Email your CV (PDF) and a short note answering:
1. A clinical workflow you believe AI + human review (like PRR.AI) could transform.
2. A dataset or paper that excites you and why.
Send to hr@softsensor.ai with the subject: "Medical AI Analyst – [Full-time/Part-time] – Your Name". Rolling interviews begin 15 July 2025. Positions remain open until filled.
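PRR.AI's error-tracking boards are a proprietary product feature, but the underlying false-positive/false-negative bookkeeping is standard. A minimal scikit-learn sketch with hypothetical labels:

```python
# Minimal sketch: tally false positives/negatives for a binary validation set.
# Assumes scikit-learn is installed; the labels below are hypothetical.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # clinician-validated ground truth
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # model output on the same cases

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
print(f"sensitivity: {tp / (tp + fn):.2f}, specificity: {tn / (tn + fp):.2f}")
```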

Posted 1 month ago

Apply

5.0 years

6 Lacs

Thiruvananthapuram

On-site

5 - 7 Years
1 Opening
Trivandrum

Role description
Overview: We are looking for a skilled SIEM Administrator to manage and maintain Security Information and Event Management (SIEM) solutions such as Innspark, LogRhythm, or similar tools. This role is critical to ensuring effective security monitoring, log management, and event analysis across our systems.

Key Responsibilities:
- Design, deploy, and manage SIEM tools (e.g., Innspark, LogRhythm, Splunk).
- Develop and maintain correlation rules, alerts, dashboards, and reports.
- Integrate logs from servers, network devices, cloud services, and applications.
- Troubleshoot log collection, parsing, normalization, and event correlation issues (see the sketch below).
- Work with security teams to improve detection and response capabilities.
- Ensure SIEM configurations align with compliance and audit requirements.
- Perform routine SIEM maintenance (e.g., patching, upgrades, health checks).
- Create and maintain documentation for implementation, architecture, and operations.
- Participate in evaluating and testing new SIEM tools and features.
- Support incident response by providing relevant event data and insights.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Security, or a related field.
- 5+ years of hands-on experience with SIEM tools.
- Experience with Innspark, LogRhythm, or other SIEM platforms (e.g., Splunk, QRadar, ArcSight).
- Strong knowledge of log management and event normalization.
- Good understanding of cybersecurity concepts and incident response.
- Familiarity with Windows/Linux OS and network protocols.
- Scripting knowledge (e.g., Python, PowerShell) is a plus.
- Strong troubleshooting, analytical, and communication skills.
- Industry certifications (CEH, Security+, SSCP, or vendor-specific) are a plus.

Key Skills: SIEM tools (Innspark, LogRhythm, Splunk), troubleshooting, log management & analysis, scripting (optional), security monitoring
Job location: Thiruvananthapuram
Notice period: Immediate
Skills: SIEM, Splunk, Troubleshooting

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
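As a hedged illustration of the log parsing and normalization work described above, a minimal Python sketch; the regex and target field names are hypothetical, not any vendor's schema.

```python
# Minimal sketch: parse a raw syslog-style line into a normalized event dict.
# The pattern and target field names are hypothetical, not a vendor schema.
import re
from datetime import datetime, timezone

LINE = "Jun 12 08:31:07 fw01 sshd[2210]: Failed password for root from 10.0.0.9"

PATTERN = re.compile(
    r"(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<proc>\w+)\[(?P<pid>\d+)\]: (?P<msg>.*)"
)

def normalize(line: str) -> dict:
    m = PATTERN.match(line)
    if m is None:
        return {"raw": line, "parse_error": True}  # keep unparsed lines visible
    ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S").replace(
        year=datetime.now(timezone.utc).year  # classic syslog omits the year
    )
    return {
        "timestamp": ts.isoformat(),
        "host": m["host"],
        "process": m["proc"],
        "pid": int(m["pid"]),
        "message": m["msg"],
    }

print(normalize(LINE))
```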

Posted 1 month ago

Apply

5.0 years

3 - 7 Lacs

Gurgaon

On-site

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

Key Responsibilities
- Integration Development: Design, develop, and deploy n8n workflows (and custom JavaScript functions) to connect to vendor, customer, and OEM APIs and data sources, retrieving information such as device health, license entitlements, warranty status, and more.
- Workflow Maintenance & Optimization: Monitor existing integrations for errors or performance bottlenecks; troubleshoot failures, optimize for reliability and efficiency, and ensure appropriate error-handling and retry logic (see the sketch below).
- API Design & Documentation: Collaborate with internal stakeholders to define data schemas and API contracts; write clear API specifications and maintain up-to-date documentation of integration endpoints.
- Automation Strategy: Contribute to Hatch's overall automation strategy by evaluating new iPaaS platforms/technologies, identifying opportunities to expand or consolidate integration workflows, and recommending best practices around version control, testing, and deployment pipelines.
- Collaboration & Requirements Gathering: Partner with Leads, Customer Success, and Engineers to gather detailed integration requirements, map out data flows, and ensure alignment between technical design and business goals.
- Code Review & Standards: Participate in regular code and workflow reviews; uphold internal coding and security standards (e.g., handling of API keys and secrets, compliance with data privacy policies).
- Monitoring & Alerting: Implement monitoring dashboards and alerts (e.g., within n8n or via external monitoring tools) to proactively catch integration failures, stale data, or SLA breaches.
- Support & Troubleshooting: Serve as the primary escalation point for integration-related incidents; collaborate with Engineers to diagnose and resolve complex issues in production.

Skills Required

Education and Experience
- Minimum required: 5+ years of experience in software development or systems integration, with at least 3+ years of hands-on experience building production integrations using an iPaaS or workflow automation platform (e.g., n8n, Zapier, Workato, MuleSoft).
- Preferred: 6+ years of experience in software development or integration engineering; demonstrated experience connecting complex enterprise systems (e.g., vendor APIs, entitlement/licensing databases, CMDBs).

Knowledge, Skills, Abilities
- Programming Languages and Technologies: Proficient in JavaScript (ES6+) and comfortable writing custom code inside n8n or other automation tools. Familiarity with RESTful APIs, Webhooks, OAuth/OAuth2, and GraphQL. Experience with iPaaS/workflow platforms, especially n8n.
- Integration Concepts: Understanding of data retrieval (pull/push), transformation (mapping, normalization), and secure authentication methods. Knowledge of IT and infrastructure devices and equipment is a plus.
- Problem-Solving: Excellent analytical and problem-solving skills.
- Project Management: Ability to manage project tasks, set milestones, and meet deadlines.
- Collaboration: Strong teamwork and interpersonal skills, with the ability to work directly with end users and cross-functional teams.
- Communication: Excellent verbal and written communication skills.
- Attention to Detail: High attention to detail and commitment to quality.
- Code Review: Experience participating in and conducting code reviews.

Why AHEAD:
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA Employment Benefits include:
- Medical, Dental, and Vision Insurance
- 401(k)
- Paid company holidays
- Paid time off
- Paid parental and caregiver leave
- Plus more! See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings ("OTE") for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
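As a hedged illustration of the error-handling and retry logic the posting calls for, shown in plain Python with the requests library rather than inside n8n itself; the endpoint and auth header are placeholders.

```python
# Minimal sketch: call a REST endpoint with exponential backoff and retries.
# Uses the requests library; the URL and auth header are placeholders.
import time
import requests

URL = "https://api.example.com/v1/device-health"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder credential

def fetch_with_retry(url: str, attempts: int = 4, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            resp = requests.get(url, headers=HEADERS, timeout=10)
        except (requests.ConnectionError, requests.Timeout):
            resp = None  # network failure: treat as retryable
        if resp is not None:
            if resp.ok:
                return resp.json()
            if resp.status_code not in (429, 500, 502, 503, 504):
                resp.raise_for_status()  # non-retryable client error: fail fast
        if attempt == attempts - 1:
            raise RuntimeError(f"{url} still failing after {attempts} attempts")
        time.sleep(base_delay * 2 ** attempt)  # backoff: 1s, 2s, 4s, ...

print(fetch_with_retry(URL))
```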

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
eClerx is hiring a Product Data Management Analyst who will work within our Product Data Management team to help our customers enhance online product data quality. The role also involves creating technical specifications and product descriptions for online presentation, and working on consultancy projects to redesign e-commerce customers' website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area are a plus.

Apprentice Analyst roles and responsibilities:
- Data enrichment/gap fill, adding attributes, standardization, normalization, and categorization of online and offline product data via research through different sources like the internet, specific websites, databases, etc.
- Data quality check and correction
- Data profiling and reporting (basic)
- Email communication with the client on request acknowledgment, project status, and responses to queries
- Help customers enhance their product data quality from the technical specification and description perspective
- Provide technical consulting to the customer category managers around industry best practices of product data enhancement

Technical and Functional Skills:
- Bachelor's degree (any graduate)
- Good understanding of tools and technology
- Intermediate knowledge of MS Office/Internet

About the Team
eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 1 month ago

Apply

2.0 years

3 - 4 Lacs

Noida

On-site

Position: Web Developer

We are looking for a highly skilled Web Developer with 2+ years of experience in web-based project development. The successful candidate will be responsible for designing, developing, and implementing web applications using PHP and various open-source frameworks.

Key Responsibilities:
- Collaborate with cross-functional teams to identify and prioritize project requirements
- Develop and maintain high-quality, efficient, and well-documented code
- Troubleshoot and resolve technical issues
- Implement social network integration, payment gateway integration, and Web 2.0 features in web-based projects
- Work with RDBMS design, normalization, data modelling, transactions, and distributed databases (see the sketch below)
- Develop and maintain database PL/SQL, stored procedures, and triggers

Requirements:
- 2+ years of experience in web-based project development using PHP
- Experience with various open-source frameworks such as Laravel, WordPress, Drupal, Joomla, OsCommerce, OpenCart, TomatoCart, VirtueMart, Magento, Yii 2, CakePHP 2.6, Zend 1.10, and Kohana
- Strong knowledge of object-oriented PHP, cURL, Ajax, Prototype.js, jQuery, web services, design patterns, MVC architecture, and object-oriented methodologies
- Experience with RDBMS design, normalization, data modelling, transactions, and distributed databases
- Well-versed with MySQL (can work with other SQL flavors too)

Job Type: Full-time
Pay: ₹25,000.00 - ₹35,000.00 per month
Work Location: In person
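As a hedged illustration of the schema normalization the posting mentions, hosted in Python's built-in sqlite3 module so the snippet is self-contained; the order/customer schema is hypothetical.

```python
# Minimal sketch: a denormalized orders table split into third normal form.
# Uses the standard-library sqlite3 module; the schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Denormalized design would repeat customer name/city on every order row.
    -- Normalized (3NF): customer facts live once; orders reference them.
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        city        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        placed_on   TEXT NOT NULL,
        total       REAL NOT NULL
    );
    INSERT INTO customers VALUES (1, 'Asha', 'Noida'), (2, 'Ravi', 'Pune');
    INSERT INTO orders VALUES (10, 1, '2025-06-01', 499.0),
                              (11, 1, '2025-06-03', 150.0),
                              (12, 2, '2025-06-05', 899.0);
""")

# A join reassembles the denormalized view on demand.
for row in conn.execute("""
    SELECT o.order_id, c.name, c.city, o.total
    FROM orders o JOIN customers c USING (customer_id)
"""):
    print(row)
```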

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description
eClerx is hiring a Product Data Management Analyst who will work within our Product Data Management team to help our customers enhance online product data quality. The role also involves creating technical specifications and product descriptions for online presentation, and working on consultancy projects to redesign e-commerce customers' website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area are a plus.

Apprentice Analyst roles and responsibilities:
- Data enrichment/gap fill, adding attributes, standardization, normalization, and categorization of online and offline product data via research through different sources like the internet, specific websites, databases, etc.
- Data quality check and correction
- Data profiling and reporting (basic)
- Email communication with the client on request acknowledgment, project status, and responses to queries
- Help customers enhance their product data quality from the technical specification and description perspective
- Provide technical consulting to the customer category managers around industry best practices of product data enhancement

Technical and Functional Skills:
- Bachelor's degree (any graduate)
- Good understanding of tools and technology
- Intermediate knowledge of MS Office/Internet

About the Team
eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

What you'll do:
- Manage and maintain PostgreSQL databases in development, staging, and production environments.
- Write and optimize SQL queries, stored procedures, functions, and triggers to support application logic.
- Design, implement, and maintain logical and physical database schemas.
- Monitor database performance and implement performance tuning strategies.
- Ensure data integrity, security, and availability through regular maintenance and backups (see the sketch below).
- Collaborate with application developers to understand requirements and provide efficient database solutions.
- Handle database migrations, versioning, and deployment as part of CI/CD pipelines.
- Perform regular database health checks, index analysis, and query optimization.
- Troubleshoot and resolve database issues, including slow queries, locking, and replication errors.

What we seek in you:
- 3 to 5 years of proven experience with PostgreSQL databases and hands-on SQL development.
- Strong knowledge of PL/SQL and writing efficient stored procedures and functions.
- Experience with database schema design, normalization, and data modeling.
- Solid understanding of PostgreSQL internals, indexing strategies, and performance tuning.
- Experience with backup and recovery tools (pg_dump, pg_restore), replication, and monitoring tools.
- Proficiency with Linux/Unix command-line tools for database management.
- Familiarity with version control systems (e.g., Git) and CI/CD practices.

Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
- Abundant opportunities for engagement with customers, product managers, and leadership.
- Progressive career paths with insightful guidance from managers through ongoing feedforward sessions.
- Cultivate and leverage robust connections within diverse communities of interest.
- Choose your mentor to navigate your current endeavors and steer your future trajectory.
- Continuous learning and upskilling opportunities through Nexversity.
- Flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
- A hybrid work model promoting work-life balance.
- Comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
- Accelerated career paths to actualize your professional aspirations.

Who we are
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!
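As a hedged illustration of routine backup automation with pg_dump and pg_restore, a minimal sketch assuming the PostgreSQL client tools are installed and on PATH; the database name and file paths are placeholders.

```python
# Minimal sketch: nightly logical backup with pg_dump, verified via pg_restore.
# Assumes PostgreSQL client tools are on PATH; names and paths are placeholders.
import subprocess
from datetime import date

DB = "appdb"                                  # placeholder database name
DUMP = f"/backups/{DB}-{date.today()}.dump"   # placeholder backup path

# -Fc writes the compressed custom format that pg_restore understands.
subprocess.run(["pg_dump", "-Fc", "-f", DUMP, DB], check=True)

# Restore into a scratch database to verify the dump is actually usable.
subprocess.run(["createdb", f"{DB}_verify"], check=True)
subprocess.run(["pg_restore", "-d", f"{DB}_verify", DUMP], check=True)
print(f"backup written and verified: {DUMP}")
```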

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role:
We are seeking a highly experienced Voice AI/ML Engineer to lead the design and deployment of real-time voice intelligence systems. This role focuses on ASR, TTS, speaker diarization, wake word detection, and building production-grade modular audio processing pipelines to power next-generation contact center solutions, intelligent voice agents, and telecom-grade audio systems. You will work at the intersection of deep learning, streaming infrastructure, and speech/NLP technology, creating scalable, low-latency systems across diverse audio formats and real-world applications.

Key Responsibilities:

Voice & Audio Intelligence:
- Build, fine-tune, and deploy ASR models (e.g., Whisper, wav2vec 2.0, Conformer) for real-time transcription (see the sketch below).
- Develop and fine-tune high-quality TTS systems using VITS, Tacotron, or FastSpeech for lifelike voice generation and cloning.
- Implement speaker diarization for segmenting and identifying speakers in multi-party conversations using embeddings (x-vectors/d-vectors) and clustering (AHC, VBx, spectral clustering).
- Design robust wake word detection models with ultra-low latency and high accuracy in noisy conditions.

Real-Time Audio Streaming & Voice Agent Infrastructure:
- Architect bi-directional real-time audio streaming pipelines using WebSocket, gRPC, Twilio Media Streams, or WebRTC.
- Integrate voice AI models into live voice agent solutions, IVR automation, and AI contact center platforms.
- Optimize for latency, concurrency, and continuous audio streaming with context buffering and voice activity detection (VAD).
- Build scalable microservices to process, decode, encode, and stream audio across common codecs (e.g., PCM, Opus, μ-law, AAC, MP3) and containers (e.g., WAV, MP4).

Deep Learning & NLP Architecture:
- Utilize transformers, encoder-decoder models, GANs, VAEs, and diffusion models for speech and language tasks.
- Implement end-to-end pipelines including text normalization, G2P mapping, NLP intent extraction, and emotion/prosody control.
- Fine-tune pre-trained language models for integration with voice-based user interfaces.

Modular System Development:
- Build reusable, plug-and-play modules for ASR, TTS, diarization, codecs, streaming inference, and data augmentation.
- Design APIs and interfaces for orchestrating voice tasks across multi-stage pipelines with format conversions and buffering.
- Develop performance benchmarks and optimize for CPU/GPU, memory footprint, and real-time constraints.

Engineering & Deployment:
- Write robust, modular, and efficient Python code.
- Experience with Docker, Kubernetes, and cloud deployment (AWS, Azure, GCP).
- Optimize models for real-time inference using ONNX, TorchScript, and CUDA, including quantization, context-aware inference, and model caching.
- On-device voice model deployment.

Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees.
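As a hedged illustration of offline transcription with Whisper, a minimal sketch using the open-source openai-whisper package; the audio file path is a placeholder, and real-time streaming as described in the posting would need chunking and VAD on top of this.

```python
# Minimal sketch: transcribe an audio file with the open-source Whisper model.
# Assumes the openai-whisper package (and ffmpeg) are installed; the file
# path is a placeholder. Real-time use would add chunking and VAD.
import whisper

model = whisper.load_model("base")               # small multilingual checkpoint
result = model.transcribe("call_recording.wav")  # placeholder audio file

print(result["text"])                            # full transcript
for seg in result["segments"]:                   # per-segment timings
    print(f"[{seg['start']:6.2f}s -> {seg['end']:6.2f}s] {seg['text']}")
```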

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; work may also include deep learning, neural networks, chatbots, and image processing.
Must have skills: Machine Learning
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI/ML Engineer, you will develop applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop applications and systems utilizing AI tools and Cloud AI services.
- Implement proper cloud or on-prem application pipelines with production-ready quality.
- Apply GenAI models as part of the solution.
- Utilize deep learning and neural networks in projects.
- Create chatbots and work on image processing tasks.

Professional & Technical Skills:
- Must-have skills: proficiency in Machine Learning.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms (see the clustering sketch below).
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Machine Learning.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
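As a hedged illustration of one of the listed algorithm families, a minimal scikit-learn k-means clustering sketch on synthetic data; the data, cluster count, and seed are hypothetical choices.

```python
# Minimal sketch: k-means clustering on synthetic 2-D data with scikit-learn.
# The data, cluster count, and random seed are hypothetical choices.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])
print("centers:\n", km.cluster_centers_.round(2))
```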

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Minimum Requirements
- 5+ years of experience in Data Engineering, including more than 5 years of hands-on expertise with Databricks, AWS EMR, Redshift, and various database management systems.
- Advanced SQL RDBMS design and query-building skills (Oracle, SQL Server, Databricks, Redshift, etc.).
- Proficiency in programming languages like Python and PySpark.
- Experience in data normalization, data modelling, decoupled ETL, and SQL modelling (see the sketch below).
- Experience profiling, manipulating, and merging massive data sets using Big Data technologies, preferably Databricks & AWS Analytics.
- Exposure to Unix or other shell scripting, and job scheduling using Control-M or equivalent tools.
- Attention to detail and logic-based thinking in building, testing, and reviewing data pipelines containing multi-disciplinary information.
- Commitment to building using established coding and naming standards.
- Experience in guiding ad-hoc technical solutions for the team.

Preferred
- Languages: Python, PySpark, SparkSQL, SQL, Java.
- Exposure to SAP ERP systems, Salesforce.com, etc.
- Visualization tool experience, especially with Tableau or Power BI.
- Experience working in larger organizations with established individual responsibilities and teams.

We Value
- Databricks & AWS Cloud certifications.
- Experience in business domains – supply chain, pricing, sales & marketing.
- Experience working with data scientists.
- Strong problem solvers who can invent new techniques and approaches if necessary.
- Ability to work in a fast-paced environment.
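As a hedged illustration of the PySpark-style normalization work this posting asks for, a minimal sketch on a local SparkSession; the columns and cleanup rules are hypothetical.

```python
# Minimal sketch: standardize and deduplicate records with PySpark.
# Runs on a local SparkSession; the columns and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("normalize-demo").getOrCreate()

df = spark.createDataFrame(
    [("  ACME Corp ", "IN", "2025-06-01"),
     ("acme corp",    "IN", "2025-06-01"),
     ("Globex",       "us", "2025-06-02")],
    ["vendor", "country", "loaded_on"],
)

normalized = (
    df.withColumn("vendor", F.lower(F.trim("vendor")))   # canonical casing
      .withColumn("country", F.upper("country"))         # ISO-style codes
      .dropDuplicates(["vendor", "country"])             # decoupled dedup step
)
normalized.show(truncate=False)
spark.stop()
```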

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Bengaluru, Karnataka, India

Remote

Job Summary:
Maimsd Technology is seeking a skilled SQL Developer with expertise in SSIS and SSRS. As a Software Engineer / Senior Software Engineer (Database), you will play a pivotal role in designing, developing, and maintaining the database infrastructure for our core product. You will collaborate closely with the development team to ensure that our database solutions are scalable, efficient, and aligned with our business objectives, contributing significantly to both data integration and analytics.

Key Responsibilities:
- Database Design and Development: Develop and implement robust database models, views, tables, stored procedures, and functions to support product development. Design and maintain efficient SSIS packages, T-SQL scripts, and SQL jobs. Optimize database performance through advanced query tuning, indexing (including columnstore indexes; see the sketch below), and partitioning strategies.
- Data Integration: Develop complex stored procedures for loading data into staging tables from various sources. Ensure data integrity and consistency across different systems.
- Data Analytics: Collaborate with data analysts to design and implement data analytics solutions using tools like SQL Server, SSIS, SSRS, and Excel Power Pivot/View/Map.
- Documentation: Document complex processes, business requirements, and specifications thoroughly.
- Database Administration (Supportive Role): Provide authentication and authorization for database access. Develop and enforce best practices for database design and development. Manage database migration activities.

Required Skills:
- Technical Skills: Strong proficiency in MS SQL Server, including advanced skills in query tuning, stored procedures, functions, views, triggers, indexes (specifically columnstore indexes), SQL Server column storage, and analyzing query execution plans. Extensive experience with database design, normalization, and performance optimization. Solid knowledge of data warehousing and ETL processes. Hands-on experience with SSIS (SQL Server Integration Services), SSRS (SQL Server Reporting Services), and Excel Power Pivot/View/Map.
- Soft Skills: Excellent analytical, problem-solving, and communication skills. Ability to work independently and as part of a collaborative team. Strong attention to detail and a commitment to quality.

Benefits:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- Remote work flexibility.
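As a hedged illustration of the columnstore indexing called out above, T-SQL issued through pyodbc; the driver string, table, and columns are placeholders for a real SQL Server environment.

```python
# Minimal sketch: add a nonclustered columnstore index for analytic scans,
# then run a typical aggregate. Connection string, table, and columns are
# placeholders for a real SQL Server environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SalesDW;Trusted_Connection=yes;"
)

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cur = conn.cursor()
    # Columnstore storage compresses analytic columns and speeds up scans.
    cur.execute("""
        CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
        ON dbo.FactSales (OrderDate, ProductKey, SalesAmount)
    """)
    cur.execute("""
        SELECT ProductKey, SUM(SalesAmount)
        FROM dbo.FactSales
        GROUP BY ProductKey
    """)
    for product_key, total in cur.fetchall():
        print(product_key, total)
```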

Posted 1 month ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

About This Role
Aladdin Data is at the heart of Aladdin, and increasingly the ability to consume, store, analyze and gain insight from data has become a key component of our competitive advantage. The DOE team is responsible for the data ecosystem within BlackRock. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, notably investors, operations teams and data scientists. We focus on evolving our platform to deliver exponential scale to the firm, powering the future growth of Aladdin.

Data Pipeline Engineers at BlackRock get to experience working at one of the most recognized financial companies in the world while being part of a software development team responsible for next generation technologies and solutions. Our engineers design and build large scale data storage, computation and distribution systems. They partner with data and analytics experts to deliver high quality analytical and derived data to our consumers. We are looking for data engineers who like to innovate and seek complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. We are committed to open source and we regularly give our work back to the community. Engineers looking to work in the areas of orchestration, data modeling, data pipelines, APIs, storage, distribution, distributed computation, consumption and infrastructure are ideal candidates.

Responsibilities
Data Pipeline Engineers are expected to be involved from the inception of projects: understanding requirements, then architecting, developing, deploying, and maintaining data pipelines (ETL/ELT). Typically, they work in a multi-disciplinary squad (we follow Agile!), which involves partnering with program and product managers to expand the product offering based on business demands. Design is an iterative process, whether for UX, services or infrastructure. Our goal is to drive up user engagement and adoption of the platform while constantly working towards modernizing and improving platform performance and scalability. Deployment and maintenance require close interaction with various teams. This requires maintaining a positive and collaborative working relationship with teams within DOE as well as with the wider Aladdin developer community. Production support for applications is usually required for issues that cannot be resolved by the operations team. Creative and inventive problem-solving skills for reduced turnaround times are highly valued. Preparing user documentation to maintain both development and operations continuity is integral to the role.

An ideal candidate would have:
- At least 4+ years' experience as a data engineer
- Experience in SQL, Sybase, and Linux (a must)
- Experience coding in at least two of these languages for server-side/data processing: Java, Python, C++
- 2+ years' experience using a modern data stack (Spark, Snowflake, BigQuery, etc.) on cloud platforms (Azure, GCP, AWS)
- Experience building ETL/ELT pipelines for complex data engineering projects (using Airflow, dbt, Great Expectations would be a plus; see the sketch below)
- Experience with database modeling and normalization techniques
- Experience with object-oriented design patterns
- Experience with DevOps tools like Git, Maven, Jenkins, GitLab CI, Azure DevOps
- Experience with Agile development concepts and related tools
- Ability to troubleshoot and fix performance issues across the codebase and database queries
- Excellent written and verbal communication skills
- Ability to operate in a fast-paced environment
- Strong interpersonal skills with a can-do attitude under challenging circumstances
- BA/BS or equivalent practical experience

Skills that would be a plus:
- Perl, ETL tools (Informatica, Talend, dbt, etc.)
- Experience with Snowflake or other cloud data warehousing products
- Exposure to workflow management tools such as Airflow
- Exposure to messaging platforms such as Kafka
- Exposure to NoSQL platforms such as Cassandra, MongoDB
- Building and delivering REST APIs

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
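As a hedged illustration of pipeline orchestration with Airflow, one of the tools the posting names as a plus, a minimal two-task DAG; the DAG name and task bodies are placeholders.

```python
# Minimal sketch: a two-task daily ETL DAG in Apache Airflow.
# Assumes apache-airflow 2.4+ is installed (earlier versions use
# schedule_interval instead of schedule); task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pulling rows from the source system")       # placeholder extract

def load() -> None:
    print("writing normalized rows to the warehouse")  # placeholder load

with DAG(
    dag_id="daily_positions_etl",        # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task            # run load only after extract succeeds
```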

Posted 1 month ago

Apply

8.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Title: Software Architect
Location: Indore, Madhya Pradesh (On-site / Hybrid)
Experience: 8+ years
Department: Technology / Engineering
Employment Type: Full-Time

About the Role:
We are looking for an experienced and hands-on Software Architect to lead the redesign and optimization of our core systems at Alveofit. The ideal candidate will have deep expertise in system performance tuning, API architecture, and database schema design. You will work closely with cross-functional teams to ensure our technology infrastructure is scalable, efficient, and future-ready.

Key Responsibilities:
- System Optimization: Analyze current architecture, identify performance bottlenecks, and implement system-wide improvements.
- Database Design: Redesign the existing database schema to improve efficiency, scalability, and query performance. Ensure normalization, indexing, and partitioning are optimized for the workload.
- API Architecture: Redesign and standardize RESTful APIs for internal and external integrations with a focus on speed, security, and maintainability.
- Tech Leadership: Provide architectural direction and mentorship to the development team. Review code and design documents.
- Collaboration: Work closely with DevOps, Product Management, QA, and other engineering teams to align architecture with business goals.
- Documentation: Maintain high-quality documentation for architecture decisions, API schemas, and database designs.

Required Skills & Experience:
- Proven experience (8+ years) in software architecture and backend development.
- Strong knowledge of modern backend technologies (Node.js, Python, or Java preferred).
- Expertise in relational databases (PostgreSQL, MySQL, or equivalent); experience with NoSQL is a plus.
- Solid understanding of RESTful API design and tools like Swagger/OpenAPI.
- Experience with microservices, message queues (RabbitMQ, Kafka), and containerized environments (Docker, Kubernetes).
- Strong problem-solving and analytical skills, with a focus on performance optimization and refactoring legacy systems.
- Familiarity with CI/CD pipelines, version control (Git), and cloud platforms (AWS/Azure/GCP).

Preferred Qualifications:
- Experience working in healthcare or medical device platforms.
- Prior experience migrating monolithic systems to microservices.
- Working knowledge of security and compliance standards (HIPAA, ISO 13485, etc.).

What We Offer:
- Opportunity to work on meaningful health-tech products with real-world impact.
- Competitive compensation and performance bonuses.
- A collaborative and innovative work environment in the heart of Indore.
- Flexible work culture with opportunities for career growth.

Posted 1 month ago

Apply

7.5 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; work may also include deep learning, neural networks, chatbots, and image processing.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the implementation of AI/ML models.
- Conduct research on emerging AI technologies.
- Optimize AI algorithms for performance and scalability.

Professional & Technical Skills:
- Must-have skills: proficiency in Data Engineering.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 1 month ago

Apply

7.5 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; work may also include deep learning, neural networks, chatbots, and image processing.
Must have skills: Salesforce Einstein AI
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the implementation of AI/ML models.
- Conduct research on emerging AI technologies.
- Optimize AI algorithms for performance and scalability.

Professional & Technical Skills:
- Must-have skills: proficiency in Salesforce Einstein AI.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Salesforce Einstein AI.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Overview

Role description
We are looking for a skilled SIEM Administrator to manage and maintain Security Information and Event Management (SIEM) solutions such as Innspark, LogRhythm, or similar tools. This role is critical to ensuring effective security monitoring, log management, and event analysis across our systems.

Key Responsibilities
- Design, deploy, and manage SIEM tools (e.g., Innspark, LogRhythm, Splunk).
- Develop and maintain correlation rules, alerts, dashboards, and reports.
- Integrate logs from servers, network devices, cloud services, and applications.
- Troubleshoot log collection, parsing, normalization, and event correlation issues.
- Work with security teams to improve detection and response capabilities.
- Ensure SIEM configurations align with compliance and audit requirements.
- Perform routine SIEM maintenance (e.g., patching, upgrades, health checks).
- Create and maintain documentation for implementation, architecture, and operations.
- Participate in evaluating and testing new SIEM tools and features.
- Support incident response by providing relevant event data and insights.

Required Qualifications
- Bachelor's degree in Computer Science, Information Security, or a related field.
- 5+ years of hands-on experience with SIEM tools.
- Experience with Innspark, LogRhythm, or other SIEM platforms (e.g., Splunk, QRadar, ArcSight).
- Strong knowledge of log management and event normalization.
- Good understanding of cybersecurity concepts and incident response.
- Familiarity with Windows/Linux OS and network protocols.
- Scripting knowledge (e.g., Python, PowerShell) is a plus.
- Strong troubleshooting, analytical, and communication skills.
- Industry certifications (CEH, Security+, SSCP, or vendor-specific) are a plus.

Key Skills: SIEM tools (Innspark, LogRhythm, Splunk), troubleshooting, log management & analysis, scripting (optional), security monitoring
Job location: Thiruvananthapuram
Notice period: Immediate
Skills: SIEM, Splunk, Troubleshooting

Posted 1 month ago

Apply

0 years

15 - 32 Lacs

Hyderabad, Telangana, India

Remote

ServiceNow ITSM Developer
Industry: Enterprise IT services and digital transformation

We deliver cloud-powered IT service management platforms that streamline operations, improve service quality, and unlock data-driven insights for global clients. This hybrid role based in India offers the chance to build and extend ServiceNow solutions that keep mission-critical environments running smoothly.

Role & Responsibilities
- Design, configure, and script ServiceNow ITSM modules including Incident, Problem, Change, and Catalog.
- Develop client scripts, UI policies, business rules, and integrations to automate workflows and reduce manual effort.
- Implement and maintain REST and SOAP integrations with external systems using IntegrationHub and custom APIs (see the sketch below).
- Optimize CMDB data models, discovery patterns, and service mapping for accurate asset visibility.
- Collaborate with product owners to translate user stories into secure, scalable platform features.
- Enforce platform governance, code reviews, and best practices to ensure performance and compliance.

Skills & Qualifications

Must Have
- 3+ years of hands-on ServiceNow development across core ITSM modules.
- Proficiency in JavaScript, the Glide API, and Service Portal widgets.
- Experience building REST- or SOAP-based integrations and troubleshooting MID Server connectivity.
- Solid understanding of CMDB, data normalization, and configuration item relationships.
- Agile delivery mindset with Git-based version control and CI/CD exposure.

Preferred
- Certified Implementation Specialist (ITSM) or Certified Application Developer.
- Knowledge of ITOM, Discovery, Event Management, or SecOps.
- Experience with Performance Analytics and Reporting.

Benefits & Culture Highlights
- Hybrid work model allowing flexibility between office and remote.
- Accelerated learning path with certification sponsorship and internal hackathons.
- Inclusive, outcome-focused culture that rewards innovation and ownership.

Skills: workflow design, MID Server connectivity, JavaScript, REST integrations, CI/CD, Service Portal widgets, Glide API, ITSM, SOAP integrations, Agile, ServiceNow, data normalization, Git, CMDB
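As a hedged illustration of a REST integration against ServiceNow's standard Table API: the instance URL and credentials are placeholders, and real workflows would typically run inside the platform or IntegrationHub with OAuth rather than basic-auth literals.

```python
# Minimal sketch: query open incidents via ServiceNow's REST Table API.
# Instance URL and credentials are placeholders; production code would use
# OAuth and proper secret handling rather than basic-auth literals.
import requests

INSTANCE = "https://dev00000.service-now.com"   # placeholder instance
AUTH = ("integration.user", "<password>")       # placeholder credentials

resp = requests.get(
    f"{INSTANCE}/api/now/table/incident",
    params={"sysparm_query": "active=true", "sysparm_limit": 5},
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

for inc in resp.json()["result"]:
    print(inc["number"], inc["short_description"])
```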

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD. Key Responsibilities Integration Development: Design, develop, and deploy n8n workflows (and custom JavaScript functions) to connect to vendor, customer, and OEM APIs and data sources, retrieving information such as device health, license entitlements, warranty status, and more. Workflow Maintenance & Optimization: Monitor existing integrations for errors or performance bottlenecks; troubleshoot failures, optimize for reliability and efficiency, and ensure appropriate error-handling and retry logic. API Design & Documentation: Collaborate with internal stakeholders to define data schemas and API contracts; write clear API specifications and maintain up-to-date documentation of integration endpoints. Automation Strategy: Contribute to Hatch’s overall automation strategy by evaluating new iPaaS platforms/technologies, identifying opportunities to expand or consolidate integration workflows, and recommending best practices around version control, testing, and deployment pipelines. Collaboration & Requirements Gathering: Partner with Leads, Customer Success, and Engineers to gather detailed integration requirements, map out data flows, and ensure alignment between technical design and business goals. Code Review & Standards: Participate in regular code and workflow reviews; uphold internal coding and security standards (e.g. handling of API keys, secrets, compliance with data privacy policies). Monitoring & Alerting: Implement monitoring dashboards and alerts (e.g., within n8n or via external monitoring tools) to proactively catch integration failures, stale data, or SLA breaches. Support & Troubleshooting: Serve as primary escalation point for integration-related incidents; collaborate with Engineers to diagnose and resolve complex issues in production. Skills Required Education and Experience Minimum Required – 5+ years of experience in software development or systems integration, with at least 3+ year of hands-on experience building production integrations using an iPaaS or workflow automation platform (e.g., n8n, Zapier, Workato, MuleSoft). Preferred – 6+ years of experience in software development or integration engineering; demonstrated experience connecting complex enterprise systems (e.g., vendor APIs, entitlement/licensing databases, CMDBs) Knowledge, Skills, Abilities Programming Languages and Technologies: Proficient in JavaScript (ES6+) and comfortable writing custom code inside n8n or other automation tools. Familiarity with RESTful APIs, Webhooks, OAuth/OAuth2, and GraphQL. Experience with iPaaS/workflow platforms—especially n8n or other iPaaS platforms. 
Integration Concepts: Understanding of data retrieval (pull/push), transformation (mapping, normalization), and secure authentication methods. Knowledge of IT and infrastructure devices and equipment is a plus.
Problem-Solving: Excellent analytical and problem-solving skills.
Project Management: Ability to manage project tasks, set milestones, and meet deadlines.
Collaboration: Strong teamwork and interpersonal skills, with the ability to work directly with end users and cross-functional teams.
Communication: Excellent verbal and written communication skills.
Attention to Detail: High attention to detail and commitment to quality.
Code Review: Experience participating in and conducting code reviews.

Why AHEAD
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, encouraging cross-department training and development, and sponsoring certifications and credentials for continued learning.

USA Employment Benefits Include
Medical, Dental, and Vision Insurance
401(k)
Paid company holidays
Paid time off
Paid parental and caregiver leave
Plus more! See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings (“OTE”) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate’s relevant experience, qualifications, and geographic location.
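For illustration, here is a minimal Python sketch of the retry and error-handling pattern this role calls for when polling vendor APIs. The endpoint, field names, and backoff policy are hypothetical, and in n8n itself this logic would typically live in a JavaScript function node:

```python
import time
import requests

BASE_URL = "https://api.example-vendor.com/v1"  # hypothetical vendor endpoint

def fetch_warranty_status(device_id: str, token: str, retries: int = 3) -> dict:
    """Fetch warranty status for a device, retrying transient failures
    with exponential backoff (a sketch of the error-handling pattern,
    not any specific vendor's API)."""
    for attempt in range(retries):
        try:
            resp = requests.get(
                f"{BASE_URL}/devices/{device_id}/warranty",
                headers={"Authorization": f"Bearer {token}"},
                timeout=10,
            )
            if resp.status_code == 429:  # rate limited: back off and retry
                time.sleep(2 ** attempt)
                continue
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # surface the failure so monitoring/alerting can catch it
            time.sleep(2 ** attempt)
    return {}
```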

Posted 1 month ago

Apply

0 years

5 - 8 Lacs

Bengaluru

On-site

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our 39,000 employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.

The electronic Clinical Outcomes Assessment (eCOA) Build Programmer is responsible for designing, programming, and testing clinical trial data collection databases, including the mapping, testing, and normalization of data into a clinical data warehouse. This requires an in-depth understanding of data technology, data flow, data standards, database programming, normalization, and testing. This role will collaborate with Data and Analytics colleagues such as the Clinical Data Associate, Clinical Data Manager, and Statistician, and with other key partners, to deliver standardized data collection methods and innovative validation solutions for use in global clinical trials.

Responsibilities:
This job description is intended to provide a general overview of the job requirements at the time it was prepared. The job requirements of any position may change over time and may include additional responsibilities not specifically described in the job description. Consult with your supervisor regarding your actual job responsibilities and any related duties that may be required for the position.
Portfolio Delivery:
Gather and influence eCOA design specifications to enable successful trial implementation
Program and test data collection systems and associated data repository mappings for a trial or set of trials within a program using data standards library components
Ensure data collection systems and data warehouse mappings are delivered accurately, efficiently, and in alignment with study objectives
Partner with translation vendors to implement localized data collection
Partner with the Clinical Build Programmer to ensure a complete data build for trial data collection needs
Provide insights into study-level deliverables (i.e., Data Management Plan, Project Plan, database, and observed datasets)
Support submission, inspection, and regulatory response activities
Lead cross-Business Unit/Therapeutic Area projects or programs with high complexity
Develop and test new ideas and/or apply innovative solutions that add new value to the portfolio

Project Management:
Increase speed, accuracy, and consistency in the development of systems solutions
Enable metrics reporting of study development timelines and pre- and post-production changes to the database
Partner to deliver the study database per business need and before the first patient visit
Follow and influence data standard decisions and strategies for a study and/or program
Utilize therapeutic knowledge and a deep understanding of the technology used to collect clinical trial data
Effectively apply knowledge of applicable internal, external, and regulatory requirements/expectations (MQA, CSQ, MHRA, FDA, ICH, GCP, PhRMA, privacy knowledge, etc.) to study build deliverables
Integrate multi-functional and/or external information and apply technical knowledge to data-driven decision making

Enterprise Leadership:
Continually seek and implement means of improving processes to reduce study build cycle time, decrease work effort, and enable the normalization of various sources of data into a common data repository in a way that allows for improved integration, consumption, and downstream analysis
Represent Data and Analytics processes in multi-functional initiatives
Actively engage in shared learning across the Data and Analytics organization
Increase reusability of screens and edits by improving the initial design
Reduce post-production changes and streamline the change control process
Anticipate and resolve key technical, operational, or business problems that impact the Data and Analytics organization
Interact with regulators, business partners, and outside stakeholders on business issues
Think end to end, consistently managing risk to minimize impact on delivery
Build a diverse multi-functional and internal/external network to understand how different disciplines and approaches contribute to research and development
Focus on defining database solutions and timelines in support of advancing the portfolio

Basic Qualifications & Requirements:
Bachelor’s or Master’s degree in computer science, engineering, a medical field, informatics, life sciences, statistics, or information technology, with hands-on experience in database programming, or a combination of clinical data management, system validation, and data analysis experience in the clinical, pharmaceutical, biotech, CRO, or regulatory agency sectors.
eCOA (electronic Clinical Outcome Assessment), eDC (electronic data capture), eSource, or Direct Data Capture implementation
Design of electronic CRF screens to capture clinical data and build the required validations
Understanding of clinical protocols and interpretation of clinical terminology to create or design specifications
Familiarity with clinical data tools, technologies, and workflows for collecting patient data, and with testing and validation of systems
Analyze the impact of, and implement, post-production changes to the study design
Data analytics and visualizations
Articulate the flow of data (structure and format) from patient to analysis and apply this knowledge to data solutions
Decide the technology platform (system/database) for data acquisition and aggregation
Utilization of clinical/drug development knowledge and an ability to liaise effectively with study team members (i.e., Data Sciences, Statistics, PK, Operations, Medical)
Strong therapeutic/scientific knowledge in the field of research
Project management experience
Vendor management experience
Understanding of and experience in using data standards
Knowledge of medical terminology
Ability to balance multiple activities, prioritize, and manage ambiguity
Demonstrated exemplary teamwork/interpersonal skills
Proven problem solving, attention to detail, and results-oriented behaviors in a fast-paced environment

Additional Preferences:
Programming experience with HTML, CSS, JavaScript, Node.js, and JSON
Experience with relational and non-relational database technologies utilizing SQL, data functions, and procedures (e.g., MongoDB)
Technical proficiency with HTML, data mapping, and understanding of database structures
Experience with GitLab utilities
Quick learner of new technology trends
Excellent leadership, communication (written and oral), and interpersonal skills
Demonstrated leadership in a professional setting
Demonstrated teamwork and collaboration in a professional setting

Other Information:
Domestic and international travel may be required.

Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability, or any other legally protected status. #WeAreLilly
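For illustration, a minimal Python sketch of the mapping-and-normalization idea behind moving eCOA data into a warehouse: flattening a raw assessment payload into one row per item response. The payload shape and field names are invented for the example and do not reflect any actual Lilly data standard:

```python
# Illustrative raw eCOA payload (hypothetical structure and values).
RAW_ECOA = {
    "subject_id": "1001",
    "visit": "WEEK_4",
    "assessment": "PHQ-9",
    "responses": [{"item": 1, "score": 2}, {"item": 2, "score": 1}],
}

def normalize_ecoa(record: dict) -> list[dict]:
    """Map one raw eCOA payload into warehouse-ready rows (one per item),
    carrying the trial context onto every row."""
    return [
        {
            "subject_id": record["subject_id"],
            "visit": record["visit"],
            "assessment": record["assessment"],
            "item": r["item"],
            "score": r["score"],
        }
        for r in record["responses"]
    ]

print(normalize_ecoa(RAW_ECOA))
```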

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
eClerx is hiring a Product Data Management Analyst who will work within our Product Data Management team to help our customers enhance online product data quality for electrical, mechanical, and electronics products. The role also involves creating technical specifications and product descriptions for online presentation, as well as consultancy projects on redesigning e-commerce customers’ website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all key stakeholders, highlighting opportunities for improvement and any concerns. The candidate must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area is a plus.

Apprentice Analyst roles and responsibilities:
Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources such as the internet, specific websites, databases, etc.
Data quality checks and correction
Data profiling and reporting (basic)
Email communication with the client on request acknowledgment, project status, and responses to queries
Help customers enhance their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective
Provide technical consulting to customer category managers around industry best practices for product data enhancement

Technical And Functional Skills
Bachelor’s Degree in Engineering from the Electrical, Mechanical, or Electronics stream
Excellent technical knowledge of engineering products (pumps, motors, HVAC, plumbing, etc.) and technical specifications
Intermediate knowledge of MS Office/Internet
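For illustration, a minimal Python sketch of the kind of attribute standardization this role performs: normalizing free-text voltage specifications into one canonical format and flagging values that need manual research. The unit and patterns are illustrative only:

```python
import re

def normalize_voltage(raw: str) -> str | None:
    """Map variants like '230V', '230 v', '0.23 kV' to a canonical '230 V'.
    Returns None for unrecognized values so they can be routed to the
    manual data quality check and research step."""
    m = re.match(r"\s*([\d.]+)\s*(k?)v\s*$", raw, re.IGNORECASE)
    if not m:
        return None
    value = float(m.group(1)) * (1000 if m.group(2).lower() == "k" else 1)
    return f"{value:g} V"

for raw in ["230V", "230 v", "0.23 kV", "unknown"]:
    print(raw, "->", normalize_voltage(raw))
```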

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Lead Data Engineer
Experience: 8–10 Years
Location: Remote
Mandatory: Prior hands-on experience with Fivetran integrations

About the Role:
We are seeking a highly skilled Lead Data Engineer with 8–10 years of deep expertise in cloud-native data platforms, including Snowflake, Azure, DBT, and Fivetran. This role will drive the design, development, and optimization of scalable data pipelines, leading a cross-functional team and ensuring data engineering best practices are implemented and maintained.

Key Responsibilities:
Lead the design and development of data pipelines (batch and real-time) using Azure, Snowflake, DBT, Python, and Fivetran
Translate complex business and data requirements into scalable, efficient data engineering solutions
Architect multi-cluster Snowflake setups with an eye on performance and cost
Design and implement robust CI/CD pipelines for data workflows (Git-based)
Collaborate closely with analysts, architects, and business teams to ensure data architecture aligns with organizational goals
Mentor onshore/offshore data engineers and review their work
Define and enforce coding standards, testing frameworks, monitoring strategies, and data quality best practices
Handle real-time data processing scenarios where applicable
Own end-to-end delivery and documentation for data engineering projects

Must-Have Skills:
Fivetran: Proven experience integrating and managing Fivetran connectors and sync strategies
Snowflake expertise: warehouse management, cost optimization, query tuning; internal vs. external stages, loading/unloading strategies; schema design, security model, and user access
Python (advanced): Modular, production-ready code for ETL/ELT, APIs, and orchestration
DBT: Strong command of DBT for transformation workflows and modular pipelines
Azure: Azure Data Factory (ADF), Databricks, and integration with Snowflake and other services
SQL: Expert-level SQL for transformations, validations, and optimizations
Version Control: Git, branching, pull requests, and peer code reviews
CI/CD: DevOps/DataOps workflows for data pipelines
Data Modeling: Star schema, Data Vault, normalization/denormalization techniques
Strong documentation using Confluence, Word, Excel, etc.
Excellent communication skills – verbal and written

Good to Have:
Experience with real-time data streaming tools (Event Hub, Kafka)
Exposure to monitoring/data observability tools
Experience with cost management strategies for cloud data platforms
Exposure to Agile/Scrum-based environments
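For illustration, a minimal Python sketch of one common Fivetran orchestration step: triggering a connector sync from a pipeline before downstream DBT transformations run. This assumes Fivetran's v1 REST sync endpoint and key/secret basic auth; the connector ID and credentials are placeholders:

```python
import requests

FIVETRAN_API = "https://api.fivetran.com/v1"  # assumes the public v1 REST API

def trigger_sync(connector_id: str, api_key: str, api_secret: str) -> None:
    """Request an on-demand sync for one Fivetran connector.
    Fivetran authenticates with API key/secret as HTTP basic auth."""
    resp = requests.post(
        f"{FIVETRAN_API}/connectors/{connector_id}/sync",
        auth=(api_key, api_secret),
        timeout=30,
    )
    resp.raise_for_status()  # fail loudly so the orchestrator can retry/alert
    print(resp.json().get("message", "sync requested"))

# Usage (placeholder values):
# trigger_sync("my_connector_id", "FIVETRAN_KEY", "FIVETRAN_SECRET")
```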

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Rajkot, Gujarat, India

Remote

Job Summary
Responsible for migrating the user base from on-prem to MS Teams, plus tenant-to-tenant migrations. 8 to 10 years of experience with Skype for Business and Microsoft Teams Enterprise Voice is a must, including experience with AudioCodes/Sonus gateways and downstream SBCs; upgrading, installing, configuring, designing, and migrating voice from PBX to MS Teams; and experience and knowledge of MS Teams Direct Routing and Operator Connect for Enterprise Voice.

Responsibilities
Migrate the user base from on-prem to MS Teams and perform tenant-to-tenant migrations
Upgrade, install, configure, design, and migrate voice from PBX to MS Teams, including AudioCodes/Sonus gateways and downstream SBCs
Apply knowledge of MS Teams Direct Routing and Operator Connect for Enterprise Voice
Plan bandwidth, optimal conferencing traffic capacity, and QoS
Design voice interoperability to the PSTN, QoS implementation for conferencing and other applications, and integration with third-party telephony environments
Good knowledge of DR, high-availability setup, and survivability (SBA)
Troubleshoot Skype/Teams Enterprise Voice issues (gateways, SBAs, and complex issues)
Knowledge of Enterprise Voice features: Call Admission Control, Call Park, Media Bypass, Auto Attendant, etc.
Configure and troubleshoot dial plans, normalization rules, routes, PSTN usage, and voice policies
SIP and SIP trunk integration: define outbound translation rules and inbound dial plans; SIP call flow concepts and knowledge of call methods
Knowledge of Call Quality Dashboard reporting and management
Overview of integrating video endpoints, Teams Meeting Rooms, and collaboration bar configurations
Understanding of Response Groups, Call Queues, Auto Attendant, dial-by-extension workflows, and call routing methods
Good knowledge of Active Directory & Domain Services and Certificate Authority
Experience in planning, designing, deploying, and configuring SfB and Microsoft Teams
Knowledge of deploying remote sites with Enterprise Voice
Knowledge of Windows PowerShell & PowerShell scripting
Oversee the schedule of activities to ensure planning and support is provided as requested
Perform preventative maintenance to resolve problems, or route them to the appropriate vendor or manufacturer for resolution
Coordinate with customer staff to determine AV/V and basic infrastructure requirements for any new conference room or media design
Willing to take initiative and drive automation to improve efficiency and productivity; able to work in a fast-paced environment with the ability to adapt to frequent changes and deliver solutions
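Dial plan normalization rules in Teams are regex pattern/translation pairs, configured in practice with PowerShell cmdlets such as New-CsVoiceNormalizationRule. As a language-neutral illustration, here is a minimal Python sketch of the same idea, with an invented dialing prefix:

```python
import re

# Pattern/translation pairs, analogous to Teams normalization rules.
# The +1425555 DID prefix is an invented example, not a real range.
RULES = [
    (re.compile(r"^(\d{4})$"), r"+1425555\1"),   # 4-digit extension -> DID
    (re.compile(r"^9?(\d{10})$"), r"+1\1"),      # national dialing -> E.164
]

def normalize(dialed: str) -> str:
    """Translate a dialed string to E.164 using the first matching rule."""
    for pattern, translation in RULES:
        if pattern.match(dialed):
            return pattern.sub(translation, dialed)
    return dialed  # no rule matched; a real dial plan would block or flag this

print(normalize("1234"))         # -> +14255551234
print(normalize("94255551234"))  # -> +14255551234
```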

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Company Description
Triple I is a leading provider of AI-powered tools for ESG reporting automation. The company offers solutions that handle the entire ESG process, from real-time data integration to audit-ready reports aligned with industry regulations. Trusted by teams across various industries, Triple I simplifies ESG reporting to help enterprises move faster, stay compliant, and reduce workloads.

Role Description
We’re looking for a skilled AI Engineer to build a powerful AI-driven system that can analyze, transform, and standardize raw datasets into a predefined destination schema, with full language normalization, schema mapping, and intelligent data validation. This role is perfect for someone with deep expertise in data pipelines, NLP, and intelligent schema inference who thrives on creating scalable, adaptable solutions that go far beyond hardcoded logic.

What You’ll Be Doing
Develop a generalizable AI algorithm that transforms raw, unstructured (or semi-structured) source datasets into a standardized schema
Automate schema mapping, data enrichment, PK/FK handling, language translation, and duplicate detection
Build logic to flag unresolved data, generate an UnresolvedData_Report, and explain confidence or failure reasons
Ensure all outputs are generated in English only, regardless of input language
Experiment with 2–3 AI/ML approaches (e.g., NLP models, rule-based logic, transformers, clustering) and document tradeoffs
Deliver all outputs (destination tables) in clean, validated formats (CSV/XLSX)
Maintain detailed documentation of preprocessing, validation, and accuracy logic

Key Responsibilities
Design AI logic to dynamically extract, map, and organize data into 10+ destination tables
Handle primary key/foreign key relationships across interconnected tables
Apply GHG Protocol logic to assign Scope 1, 2, or 3 emissions automatically based on activity type
Build multilingual support: auto-translate non-English input and ensure the destination is 100% English
Handle duplicate and conflicting records with intelligent merging or flagging logic
Generate automated validation logs for transparency and edge-case handling
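For illustration, a minimal Python sketch of the schema-mapping-with-unresolved-report idea: matching source columns to a destination schema by normalized name and routing anything unmatched into an UnresolvedData_Report entry rather than guessing. The column names and destination schema are hypothetical:

```python
import re

# Hypothetical destination columns; a real system would hold 10+ tables.
DESTINATION_SCHEMA = {"site_id", "activity_type", "fuel_consumed_kwh"}

def canon(name: str) -> str:
    """Normalize a column name for matching: lowercase, alphanumerics only."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def map_columns(source_columns: list[str]) -> tuple[dict, list[dict]]:
    """Return (resolved mapping, unresolved report entries)."""
    lookup = {canon(dest): dest for dest in DESTINATION_SCHEMA}
    mapping, unresolved = {}, []
    for col in source_columns:
        dest = lookup.get(canon(col))
        if dest:
            mapping[col] = dest
        else:
            unresolved.append({"column": col, "reason": "no schema match"})
    return mapping, unresolved

mapping, report = map_columns(["Site-ID", "Activity Type", "Fuel (litres)"])
print(mapping)  # {'Site-ID': 'site_id', 'Activity Type': 'activity_type'}
print(report)   # [{'column': 'Fuel (litres)', 'reason': 'no schema match'}]
```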

Posted 1 month ago

Apply
