
18008 Tuning Jobs - Page 5

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

We're looking for an experienced Audio Engineer to join MSIPL, one of India's leading live audio production companies. This role is vital to delivering world-class sound across concerts and other live events.

What You'll Do: You'll be responsible for end-to-end audio production, from equipment planning and system rigging to console setup and live mixing. You'll work with cutting-edge gear like L-Acoustics and DiGiCo, leading on-site setups and ensuring flawless show execution.

Success in This Role: Smooth coordination and execution of complex event setups. Flawless live audio performance under pressure. Proactive equipment management and client-facing professionalism.

Who You Are: An audio expert with at least 3 years of hands-on experience in large-format shows, proficient in digital consoles, system tuning, and wireless configuration. You thrive in fast-paced environments, are detail-oriented, and are ready to travel as part of a dynamic team.

Posted 1 day ago

Apply

1.0 - 4.0 years

0 Lacs

Kerala, India

On-site

About the Role: We are looking for a skilled and motivated Odoo Developer to join our team in Ernakulam. If you have 1 to 4 years of hands-on experience with Odoo and a passion for building tailored ERP solutions, this could be the perfect opportunity for you. You'll collaborate with functional consultants and business analysts to design, develop, and implement Odoo modules that align with a wide range of business needs.

Key Responsibilities: Develop and customize Odoo modules for core business areas such as Sales, Purchase, Inventory, Accounting, CRM, and HR. Enhance existing functionalities and create new features based on project specifications. Work closely with cross-functional teams to understand and translate technical and functional requirements. Write clean, well-structured code using Python, XML, and Odoo development best practices. Design custom reports and dashboards, and integrate Odoo with external systems. Perform debugging, testing, and performance tuning to ensure high-quality outputs. Manage database operations including data migration, upgrades, and issue resolution. Provide ongoing support and continuous improvements for deployed Odoo solutions.

Requirements: 1 to 4 years of experience in Odoo development. Strong command of Python, XML, JavaScript, and PostgreSQL. Deep understanding of Odoo's architecture, ORM, and modular structure. Experience working with both Odoo Community and Enterprise editions. Familiarity with HTML, CSS, and RESTful APIs. Strong analytical, debugging, and problem-solving skills. Effective communication and teamwork abilities. Knowledge of Git or other version control systems is a plus.
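For illustration, here is a minimal sketch of the kind of module customization this role describes: extending a core Odoo model with a computed field, written against the Odoo 16-style ORM. The field name, scoring rule, and file layout are hypothetical, and the code only runs inside an Odoo addon.

```python
# models/sale_priority.py -- hedged sketch of an Odoo module extension.
# Field name and scoring rule are illustrative; assumes Odoo 16-style API.
from odoo import api, fields, models


class SaleOrder(models.Model):
    _inherit = "sale.order"  # extend the core Sales module

    priority_score = fields.Integer(
        string="Priority Score",
        compute="_compute_priority_score",
        store=True,
        help="Simple score used to sort urgent orders in list views.",
    )

    @api.depends("amount_total", "partner_id.category_id")
    def _compute_priority_score(self):
        for order in self:
            # Hypothetical rule: large orders from tagged partners rank higher.
            score = int(order.amount_total // 1000)
            if order.partner_id.category_id:
                score += 10
            order.priority_score = score
```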

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures.

What We're Looking For: Join our Generative AI team as a Senior Data Scientist, reporting directly to the Lead Data Scientist in India. You will play a crucial role in building, optimizing, and maintaining AI-ready data infrastructure for advanced Generative AI applications. Your focus will be on hands-on implementation of cutting-edge data extraction, curation, and metadata enhancement techniques for both text and numerical data. You will be a key contributor to the development of innovative solutions, ensuring rapid iteration and deployment, and supporting the Lead in achieving the team's strategic goals.

What Will You Be Doing:
AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced generative AI models.
Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems.
Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications.
GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications.
Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance.
LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks.
Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI.
Technical Mentorship: Act as a technical mentor and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies.
Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives.
Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness.
Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering.

Skills And Experience:
Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field.
Programming and Frameworks: 2+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face.
GenAI Tools: 1+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications.
Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques.
LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment.
RAG and Knowledge Graphs: Practical understanding and experience using vector databases. Familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus.
Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization.
Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation.
Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams.

What's In It For You: Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and, as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Benefits include a competitive salary, a hybrid working policy (3 days in the Mumbai office / 2 days WFH once fully inducted), a group healthcare scheme, 18 days of annual leave, 8 days of casual leave, and extensive internal and external training.

Hours: This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus' key business processes every day, so you will be required to work on a shift-based rota with other members of the team, supporting the business until 8pm. Support hours typically run from 11am to 8pm, with each member of the team participating 2-3 times a week.

Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools, and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world's principal commodity trading hubs. Companies, trading firms, and governments in 160 countries trust Argus data to make decisions, analyse situations, manage risk, facilitate trading, and plan for the long term. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities, and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
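To make the RAG responsibility concrete, here is a minimal sketch of the retrieval half of a RAG pipeline: embed documents, embed the query, and select top-k passages to place in an LLM prompt. It uses sentence-transformers with a brute-force cosine search standing in for the managed vector databases the listing names; the document texts are illustrative only.

```python
# Hedged sketch: retrieval step of a RAG pipeline with a tiny in-memory corpus.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Brent crude assessments rose on tighter Atlantic basin supply.",
    "Fertilizer prices softened as ammonia feedstock costs fell.",
    "LNG freight rates spiked ahead of the winter restocking season.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q          # cosine similarity (vectors are normalized)
    top = np.argsort(-scores)[:k]  # indices of the k best matches
    return [docs[i] for i in top]

context = "\n".join(retrieve("Why did LNG shipping costs move?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```

In production the brute-force search would be replaced by a vector database query, but the embed-retrieve-prompt shape stays the same.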

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures.

What We're Looking For: Join our Generative AI team to lead a new group in India focused on creating and maintaining AI-ready data. As the point of contact in Mumbai, you will guide the local team and ensure seamless collaboration with our global counterparts. Your contributions will directly impact the development of innovative solutions used by industry leaders worldwide, supporting text and numerical data extraction, curation, and metadata enhancements to accelerate development and ensure rapid response times. You will play a pivotal role in transforming how our data are seamlessly integrated with AI systems, paving the way for the next generation of customer interactions.

What Will You Be Doing:
Lead and Develop the Team: Oversee a team of data scientists in Mumbai, mentoring and guiding junior team members and fostering their professional growth and development.
Strategic Planning: Develop and implement strategic plans for data science projects, ensuring alignment with the company's goals and objectives.
AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced Generative AI models.
Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems.
Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications.
GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications.
Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks.
LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks.
Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI.
Technical Leadership: Act as a technical leader and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies.
Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives.
Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness.
Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering.

Skills And Experience:
Leadership Experience: Proven track record in leading and mentoring data science teams, with a focus on strategic planning and operational excellence.
Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field.
Programming and Frameworks: 5+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face.
GenAI Tools: 2+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications.
Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques.
LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment.
RAG and Knowledge Graphs: Practical understanding and experience using vector databases. Familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus.
Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization.
Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation.
Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams.

What's In It For You: Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and, as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Benefits include a competitive salary, a hybrid working policy (3 days in the Mumbai office / 2 days WFH once fully inducted), a group healthcare scheme, 18 days of annual leave, 8 days of casual leave, and extensive internal and external training.

Hours: This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus' key business processes every day, so you will be required to work on a shift-based rota with other members of the team, supporting the business until 8pm. Support hours typically run from 11am to 8pm, with each member of the team participating 2-3 times a week.

Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools, and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world's principal commodity trading hubs. Companies, trading firms, and governments in 160 countries trust Argus data to make decisions, analyse situations, manage risk, facilitate trading, and plan for the long term. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities, and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
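Since this role centers on LLM-as-a-judge evaluation, here is a hedged sketch of that loop: a judge model scores a candidate answer against a reference on a 1-5 scale. It assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY in the environment; the model name and the example texts are illustrative, not part of the posting.

```python
# Minimal LLM-as-a-judge sketch; judge model and rubric are illustrative.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """Rate the candidate answer against the reference on a 1-5
scale for factual accuracy. Reply with only the integer.

Reference: {reference}
Candidate: {candidate}"""

def judge(reference: str, candidate: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable judge model would do here
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            reference=reference, candidate=candidate)}],
        temperature=0,  # deterministic scoring for reproducible evals
    )
    return int(resp.choices[0].message.content.strip())

score = judge(
    reference="Argus was founded in 1970 and is headquartered in London.",
    candidate="Argus is a London-based price-reporting agency founded in 1970.",
)
print(score)
```

A real evaluation harness would run this over a labeled set and track agreement with human raters, but the prompt-score-parse cycle is the core of the technique.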

Posted 1 day ago

Apply

7.0 years

0 Lacs

Gurgaon Rural, Haryana, India

On-site

7+ years of experience in the data analytics field. Proven experience with Azure/AWS Databricks in building and optimizing data pipelines, architectures, and datasets. Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks. Ability to troubleshoot and optimize complex queries on the Spark platform. Knowledge of structured and unstructured data design, modelling, access, and storage techniques. Experience designing and deploying data applications on cloud platforms such as Azure or AWS. Hands-on experience in performance tuning and optimizing code running in Databricks environments. Strong analytical and problem-solving skills, particularly within Big Data environments. Experience with Big Data management tools and technologies, including Cloudera, Python, Hive, Scala, data warehouses, data lakes, AWS, and Azure.

Technical and Professional Skills (Must Have): Excellent communication skills with the ability to interact directly with customers. Azure/AWS Databricks. Python / Scala / Spark / PySpark. Strong SQL and RDBMS expertise. Hive / HBase / Impala / Parquet. Sqoop, Kafka, Flume. Airflow.
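As a concrete example of the Spark tuning this listing asks about, here is a minimal PySpark sketch of two routine moves: broadcasting a small dimension table to avoid shuffling the large side of a join, and filtering early before a wide aggregation. The tables and column names are made up for the demo.

```python
# Hedged PySpark sketch: broadcast join + early filter before aggregation.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Tiny in-memory stand-ins for a large fact table and a small dimension table.
orders = spark.createDataFrame(
    [(1, "R1", "2024-03-01", 120.0), (2, "R2", "2024-03-01", 80.0),
     (3, "R1", "2024-03-02", 250.0)],
    ["order_id", "region_id", "order_date", "amount"],
)
regions = spark.createDataFrame([("R1", "North"), ("R2", "South")],
                                ["region_id", "region_name"])

daily = (
    orders
    .join(broadcast(regions), "region_id")       # map-side join: the small side
    .where(F.col("order_date") >= "2024-01-01")  #   is shipped to every executor
    .groupBy("region_name", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)
daily.show()
```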

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Dear Candidate, please find the JD below for your reference:

Strong experience with Kotlin Multiplatform (KMP) and Kotlin language fundamentals. Proficiency in native Android and iOS development. Experience with shared-code architecture, dependency injection, and modularization. Familiarity with Ktor, SQLDelight, Coroutines, and Multiplatform libraries. Understanding of RESTful APIs, JSON, and secure data transmission. Experience with BLE/NFC integrations and sensor-based interfaces. Knowledge of cryptographic APIs and secure storage mechanisms. Strong debugging, profiling, and performance tuning skills. Experience publishing apps to the Google Play Store and Apple App Store.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Job Title: Senior Software Developer

As a Senior Software Developer (.NET), you will play a critical role in designing, developing, and deploying scalable, secure enterprise applications aligned with Inevia's business objectives. This role ensures high performance, maintainability, and reliability of applications built using .NET technologies, microservices, and SQL Server. You will work closely with cross-functional teams, including product owners and stakeholders, to deliver innovative solutions while mentoring team members in a collaborative Agile environment. The candidate must be detail-oriented, organized, and capable of managing multiple priorities in a fast-paced environment.

Responsibilities:
Application Design & Development: Design, develop, and maintain robust web applications using C#, .NET Core, and .NET 6/7/8. Develop reusable components, services, and libraries following clean coding practices. Build, consume, and secure RESTful APIs and microservices. Integrate with Angular-based frontend applications for seamless backend-frontend communication. Ensure adherence to architecture principles and coding standards.
System Optimization, Monitoring, and Quality: Perform application performance tuning and optimization. Conduct unit and integration testing, and participate in code reviews. Ensure high availability, scalability, and reliability of applications. Implement robust logging and monitoring mechanisms. Maintain observability and troubleshooting capabilities across environments.
Database and Integration: Write optimized SQL queries, stored procedures, and functions in SQL Server. Collaborate on schema design and query performance tuning. Use ORM tools like Entity Framework Core and Dapper for data access.
CI/CD and DevOps: Participate in Agile ceremonies and sprint activities. Support CI/CD pipeline setup using Azure DevOps. Participate in containerization using Docker and deployment on cloud platforms. Manage source code repositories and branching strategies.
Troubleshooting and Support: Investigate and resolve issues across development, staging, and production environments. Analyze logs and telemetry data to identify root causes and implement fixes.
Collaboration and Communication: Collaborate with development teams and stakeholders to gather and clarify requirements. Mentor developers by providing guidance and technical support.

Qualifications:
Education and Experience: Bachelor's degree in Computer Science, IT, Engineering, or a related field. 5+ years of professional experience in .NET development. Proven experience in building enterprise-grade web applications and APIs.
Knowledge and Skills: Expertise in C#, .NET Core, and .NET 6/7/8. Strong knowledge of microservices architecture, RESTful APIs, asynchronous programming, and authentication mechanisms (JWT, OAuth2). Hands-on experience with SQL Server and complex query writing. Familiarity with Entity Framework Core, LINQ, and clean architecture principles. Experience with version control systems such as Azure DevOps and Git. Knowledge of cloud technologies, preferably Azure. Exposure to unit testing and test-driven development (TDD). Knowledge of Angular frontend is a plus.

Benefits: Opportunity to work on scalable enterprise applications and backend architecture. Room for professional growth and learning. Competitive compensation package.

Additional Information: This is a full-time position located in Navi Mumbai.
Inevia is an equal opportunity employer and encourages applications from candidates of all backgrounds and experiences.

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Work Level: Individual. Core: Responsible. Leadership: Team Alignment. Industry Type: Information Technology. Function: Database Administrator. Key Skills: mSQL, SQL Writing, PL/SQL. Education: Graduate.

Note: This is a requirement for one of the Workassist hiring partners. This is a remote position.

Responsibilities: Write, optimize, and maintain SQL queries, stored procedures, and functions. Assist in designing and managing relational databases. Perform data extraction, transformation, and loading (ETL) tasks. Ensure database integrity, security, and performance. Work with developers to integrate databases into applications. Support data analysis and reporting by writing complex queries. Document database structures, processes, and best practices.

Requirements: Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field. Strong understanding of SQL and relational database concepts. Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle. Ability to write efficient and optimized SQL queries. Basic knowledge of indexing, stored procedures, and triggers. Understanding of database normalization and design principles. Good analytical and problem-solving skills. Ability to work independently and in a team in a remote setting.

Preferred Skills (Nice to Have): Experience with ETL processes and data warehousing. Knowledge of cloud-based databases (AWS RDS, Google BigQuery, Azure SQL). Familiarity with database performance tuning and indexing strategies. Exposure to Python or other scripting languages for database automation. Experience with business intelligence (BI) tools like Power BI or Tableau.

Company Description: Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000 recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: there are many more opportunities on the portal; depending on your skills, you can apply for those as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
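For illustration, here is a minimal, self-contained sketch of the core skills this listing names: creating an index and writing an optimized, parameterized query. It uses the standard-library sqlite3 module as a stand-in for MySQL/PostgreSQL/SQL Server; the table and data are invented for the demo.

```python
# Hedged sketch: index + parameterized aggregate query (stdlib sqlite3).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT,
                         status TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 10, 'paid', 120.0), (2, 10, 'open', 80.0),
                              (3, 11, 'paid', 250.0);
    -- Composite index matching the WHERE + GROUP BY below.
    CREATE INDEX idx_orders_status_customer ON orders(status, customer_id);
""")

query = """
    SELECT customer_id, SUM(amount) AS total_paid
    FROM orders
    WHERE status = ?            -- parameterized: safe and plan-cache friendly
    GROUP BY customer_id
    ORDER BY total_paid DESC
"""
for row in con.execute(query, ("paid",)):
    print(row)                  # e.g. (11, 250.0), (10, 120.0)
```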

Posted 1 day ago

Apply

6.0 years

0 Lacs

India

On-site

We are seeking a skilled and proactive Platform Lead with strong Snowflake expertise and AWS cloud exposure to lead the implementation and operational excellence of a scalable, multitenant modern data platform for a leading US-based marketing agency serving nonprofit clients. This role requires hands-on experience in managing Snowflake environments, supporting data pipeline orchestration, enforcing platform-level standards, and ensuring observability, performance, and security across environments. You will collaborate with architects, engineers, and DevOps teams to operationalize the platform's design and drive its long-term stability and scalability in a cloud-native ecosystem.

Job-Specific Duties & Responsibilities: Lead the technical implementation and stability of the multitenant Snowflake data platform across dev, QA, and prod environments. Design and manage schema isolation, role-based access control (RBAC), masking policies, and a cost-optimized Snowflake architecture for multiple nonprofit tenants. Implement and maintain CI/CD pipelines for dbt, Snowflake objects, and metadata-driven ingestion processes using GitHub Actions or similar tools. Develop and maintain automation accelerators for data ingestion, schema validation, error handling, and onboarding new clients at scale. Collaborate with architects and data engineers to ensure seamless integration with source CRMs, ByteSpree connectors, and downstream BI/reporting layers. Monitor and optimize the performance of Snowflake workloads (e.g., query tuning, warehouse sizing, caching strategy) to ensure reliability and scalability. Establish and maintain observability and monitoring practices across data pipelines, ingestion jobs, and platform components (e.g., error tracking, data freshness, job status dashboards). Manage infrastructure-as-code (IaC), configuration templates, and version control practices across the data stack. Ensure robust data validation, quality checks, and observability mechanisms are in place across all platform services. Support incident response, pipeline failures, and technical escalations in production, coordinating across engineering and client teams. Contribute to data governance compliance by implementing platform-level policies for PII, lineage tracking, and tenant-specific metadata tagging.

Required Skills, Experience & Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field. 6+ years of experience in data engineering or platform delivery, including 3+ years of hands-on Snowflake experience in production environments. Proven expertise in building and managing multitenant data platforms, including schema isolation, RBAC, and masking policies. Solid knowledge of CI/CD practices for data projects, with experience guiding pipeline implementations using tools like GitHub Actions. Hands-on experience with dbt, SQL, and metadata-driven pipeline design for large-scale ingestion and transformation workloads. Strong understanding of AWS cloud services relevant to data platforms (e.g., S3, IAM, Lambda, CloudWatch, Secrets Manager). Experience optimizing Snowflake performance, including warehouse sizing, caching, and cost control strategies. Familiarity with setting up observability frameworks, monitoring tools, and data quality checks across complex pipeline ecosystems. Proficient in infrastructure-as-code (IaC) concepts and managing configuration/versioning across environments. Awareness of data governance principles, including lineage, PII handling, and tenant-specific metadata tagging.
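To ground the tenant-isolation duties, here is a hedged sketch of the kind of chores described, issued through the Snowflake Python connector: a per-tenant schema, a scoped role, and a masking policy for PII. Connection parameters, object names, and the masking rule are examples, not the client's actual setup.

```python
# Hedged sketch: per-tenant schema, role, and PII masking policy in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="platform_admin", password="...",  # placeholders
    warehouse="PLATFORM_WH", database="NONPROFIT_DATA",
)
cur = conn.cursor()

tenant = "ACME_FOUNDATION"  # hypothetical tenant name
statements = [
    f"CREATE SCHEMA IF NOT EXISTS {tenant}",
    f"CREATE ROLE IF NOT EXISTS {tenant}_READER",
    f"GRANT USAGE ON SCHEMA {tenant} TO ROLE {tenant}_READER",
    # Mask email addresses for anyone outside the tenant's reader role.
    f"""CREATE MASKING POLICY IF NOT EXISTS {tenant}.mask_email AS
        (val STRING) RETURNS STRING ->
        CASE WHEN IS_ROLE_IN_SESSION('{tenant}_READER') THEN val
             ELSE '***MASKED***' END""",
]
for sql in statements:
    cur.execute(sql)
conn.close()
```

In practice these statements would live in version-controlled migration scripts run by CI/CD rather than ad hoc Python, in keeping with the IaC practices the listing calls for.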

Posted 1 day ago

Apply

6.0 years

8 - 12 Lacs

India

Remote

📍 Location: Remote (India)
📅 Start Date: ASAP
🔹 Type: Contract / Full-time (Flexible)

🏢 About The Company: We deliver innovative solutions that help businesses accelerate performance across application development, BPO, data services, and professional services. Our mission is to improve efficiency, reduce costs, increase profitability, and shorten time-to-market for our clients.

📌 Role Overview: We are looking for a skilled SQL Scripter with hands-on experience in Flexera database environments. The role involves designing, developing, and optimizing SQL scripts for FlexNet Manager Suite, with a strong focus on reporting, data analysis, and automation.

🛠️ Key Responsibilities: Develop and maintain SQL scripts for data extraction, custom reporting, and database automation within Flexera. Execute SQL batches and build reports in FlexNet Manager Suite (FNMS). Optimize and troubleshoot complex SQL queries. Collaborate with DBAs and developers to ensure data integrity and performance. Support database backup, recovery, and security procedures. Align all development with Flexera schema standards and best practices. Document all SQL scripts and database-related workflows.

✅ Qualifications: 3–6 years of hands-on experience as a SQL Developer or Scripter. Expert-level knowledge of SQL (queries, stored procedures, triggers, functions). Experience working with Flexera, especially FlexNet Manager Suite. Familiarity with database administration and performance tuning. Knowledge of PowerShell or Python for automation (a plus). Strong problem-solving and debugging skills. Self-motivated and able to work both independently and in a team environment.

🎯 Ideal Candidate: Has worked on Flexera or licensing compliance tools. Can handle SQL-heavy environments with minimal supervision. Understands the importance of clean, well-documented scripts.

📩 Interested? Send your CV to garima.s@zorbaconsulting.in with the subject line: SQL Scripter – Flexera Application

Skills: automation, data analysis, Python, reporting, PowerShell, FlexNet Manager Suite, SQL scripter, SQL, Flexera
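As a sketch of the scripting workflow described, here is a hedged Python example that runs a reporting query against a SQL Server database (the backend FNMS uses) via pyodbc. The table and column names below are placeholders and are NOT the real Flexera schema; server and database names are also invented.

```python
# Hedged sketch: run a SQL Server report query from Python via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=fnms-db01;DATABASE=FNMSCompliance;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Placeholder report: installed software counts per publisher.
cursor.execute("""
    SELECT Publisher, COUNT(*) AS Installs     -- hypothetical columns
    FROM dbo.InstalledSoftware                 -- hypothetical table
    GROUP BY Publisher
    ORDER BY Installs DESC
""")
for publisher, installs in cursor.fetchmany(10):
    print(f"{publisher}: {installs}")
conn.close()
```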

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

About Company: Our client is one of the world's fastest-growing AI companies, accelerating the advancement and deployment of powerful AI systems. The client helps customers in two ways: working with the world's leading AI labs to advance frontier model capabilities in thinking, reasoning, coding, agentic behavior, multimodality, multilinguality, STEM, and frontier knowledge; and leveraging that work to build real-world AI systems that solve mission-critical priorities for companies. Powering this growth is the client's talent cloud, an AI-vetted pool of 4M+ software engineers, data scientists, and STEM experts who can train models and build AI applications. All of this is orchestrated by ALAN, their AI-powered platform for matching and managing talent and generating high-quality human and synthetic data to improve model performance. ALAN also accelerates workflows for model and agent evals, supervised fine-tuning, reinforcement learning, reinforcement learning with human feedback, preference-pair generation, benchmarking, data capture for pre-training and post-training, and building AI applications. The client, based in San Francisco, California, was named #1 on The Information's annual list of "Top 50 Most Promising B2B Companies," and has been profiled by Fast Company, TechCrunch, Reuters, Semafor, VentureBeat, Entrepreneur, CNBC, Forbes, and many others. The client's leadership team includes AI technologists from Meta, Google, Microsoft, Apple, Amazon, X, Stanford, Caltech, and MIT.

Job Title: Python Developer
Location: Remote (candidates should be comfortable working US/night shifts)
Interview Mode: Virtual, two rounds (60 min technical + 30 min technical & cultural discussion)
Client: Turing
Experience: 5+ yrs
Job Type: Contract to hire
Notice Period: Immediate joiners

Roles and Responsibilities: Analyze and triage GitHub issues across trending open-source libraries. Set up and configure code repositories, including Dockerization and environment setup. Evaluate unit test coverage and quality. Modify and run codebases locally to assess LLM performance in bug-fixing scenarios. Collaborate with researchers to design and identify repositories and issues that are challenging for LLMs. Opportunities to lead a team of junior engineers and collaborate on projects.

Required Skills: Minimum 5+ years of overall experience. Strong experience with Python. Proficiency with Git, Docker, and basic software pipeline setup. Ability to understand and navigate complex codebases. Comfortable running, modifying, and testing real-world projects locally. Experience contributing to or evaluating open-source projects is a plus.
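For illustration, here is a minimal sketch of the reproduction loop this role describes: clone a repository at the commit an issue reports, build its Docker image, and run the test suite inside the container. The repo URL and commit SHA are placeholders.

```python
# Hedged sketch: reproduce a GitHub issue inside Docker via subprocess.
import subprocess

REPO = "https://github.com/example/trending-lib.git"  # placeholder repo
SHA = "abc1234"                                       # commit from the issue

def run(cmd: list[str], **kw) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, **kw)

run(["git", "clone", REPO, "repro"])
run(["git", "checkout", SHA], cwd="repro")
run(["docker", "build", "-t", "repro:issue", "repro"])
# Run the suite; a non-zero exit code confirms the reported failure.
result = subprocess.run(
    ["docker", "run", "--rm", "repro:issue", "pytest", "-x", "-q"]
)
print("issue reproduced" if result.returncode != 0 else "tests pass")
```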

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. The client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title: React.js Developer
Location: Pune
Experience: 5+ yrs
Job Type: Contract to hire (min. 1+ yr)
Notice Period: Immediate joiners

Job Description: 5 years of hands-on experience in ReactJS and Node.js. React class-based components and lifecycle methods. React function components and React Hooks. Redux framework and state management. Express.js middleware, authentication, and authorization. Jest and Enzyme for unit test cases. Styled-components in CSS. HTTP network requests and REST APIs. Proficient in HTML, responsive CSS, JavaScript, and JSON data handling. Proficient in asynchronous programming (Promises, async/await, callbacks). Good JavaScript debugging skills. Hands-on experience with Jenkins, Git, SonarQube, Checkmarx, and Nexus. Good to have: performance tuning, MongoDB.

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a skilled and hands-on AI Engineer to design, develop, and deploy an in-house AI assistant powered by LLaMA 3 and integrated with our MS SQL-based ERP system (4QT ERP). This role includes responsibility for setting up LLM infrastructure, voice input (Whisper), natural-language-to-SQL translation, and delivering accurate, context-aware responses to ERP-related queries.

Key Responsibilities: Set up and deploy LLaMA 3 (8B/FP16) models using llama-cpp-python or Hugging Face. Integrate the AI model with FastAPI to create secure REST endpoints. Connect with the MS SQL database and design query logic for ERP modules (Sales, Payments, Units, etc.). Implement prompt engineering or fine-tuning (LoRA) to improve SQL generation accuracy. Build a user-facing interface (React or a basic web UI) for interacting via text or voice. Integrate Whisper (OpenAI) or another STT system to support voice commands. Ensure model responses are secure, efficient, and auditable (only SELECT queries allowed). Supervise or perform supervised fine-tuning with custom ERP datasets. Optimize for performance (GPU usage) and accuracy (prompt/RAG tuning).

Must-Have Skills: Strong experience with LLM deployment (LLaMA 3, Mistral, GPT-type models). Solid Python development experience using FastAPI or Flask. SQL knowledge (especially MS SQL Server); must know how to write and validate queries. Experience with llama-cpp-python, Hugging Face Transformers, and LoRA fine-tuning. Familiarity with LangChain or similar LLM frameworks. Understanding of Whisper (STT) or equivalent speech-to-text tools. Experience working with GPU inference (NVIDIA 4070/5090, etc.).
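To make the core flow concrete, here is a hedged sketch: a FastAPI endpoint asks a local LLaMA model (via llama-cpp-python) to translate a question into SQL, then enforces the SELECT-only rule the posting specifies before anything would touch the ERP database. The model path, schema hint, and table names are placeholders, and the actual MS SQL execution step is deliberately omitted.

```python
# Hedged sketch: NL-to-SQL endpoint with a SELECT-only guardrail.
from fastapi import FastAPI, HTTPException
from llama_cpp import Llama

app = FastAPI()
llm = Llama(model_path="/models/llama-3-8b-instruct.gguf", n_ctx=4096)

# Hypothetical schema hint; the real one would describe the 4QT ERP tables.
SCHEMA_HINT = "Tables: Sales(Id, UnitId, Amount, PaidOn), Units(Id, Name)"

@app.post("/ask")
def ask(question: str) -> dict:
    prompt = (
        f"{SCHEMA_HINT}\n"
        f"Write one MS SQL SELECT statement answering: {question}\nSQL:"
    )
    out = llm(prompt, max_tokens=256, stop=[";"])
    sql = out["choices"][0]["text"].strip()
    # Guardrail from the spec: only SELECT queries may be executed.
    if not sql.lower().lstrip().startswith("select"):
        raise HTTPException(status_code=400, detail="Only SELECT is allowed")
    return {"sql": sql}  # execution against MS SQL would follow here
```

A production version would also parameterize the query, log every generated statement for audit, and run it under a read-only database login.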

Posted 1 day ago

Apply

8.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Oracle DBA
Location: Noida
Experience: 8-15 Years

Required Skills: Hands-on experience with Oracle database activities, SQL, OEM, Golden Gate replication (CDC & BDA), and RAC/EXA setup. Expertise in SQL profiling/performance tuning, database monitoring, backup and restore, Data Guard, the Grid Control toolset, etc. Responsible for the technical database and support of infrastructure, applications, and other components and processes. Participate in planning, development of specifications, and other supporting documentation and processes. Knowledge of the finance/banking industry, its terminology, data, and data structures is an add-on. Knowledge of SQL Server as well as Oracle databases. Knowledge of the Identity Framework. Experienced technical knowledge in a specialty area with basic knowledge of complementary infrastructures. A fast learner with the ability to dive into new products and technologies, develop subject matter expertise, and drive projects to completion. A team player with good written and verbal communication skills who can mentor other members of the production support group. Candidates with knowledge of and experience in scripting languages (ksh, Perl, etc.) are preferred. Understanding of ITIL processes. Effective use of monitoring tools.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

Introduction: Db2 is a world-class relational database. 100% of the Fortune 100 companies and more than 80% of the Fortune 500 have one or more members of the Db2 family installed, helping to run their business. IBM is continuing to modernize Db2 to be cloud-native, bringing new features and capabilities while delivering the mission-critical features the world depends on. Db2 is supported across several hyperscalers, including IBM Cloud, AWS, and Azure, and in a number of deployment models, including self-managed and fully managed SaaS, along with tight integration with cloud-native services. The Db2 engine itself is blazingly fast and is written in C/C++ with deep OS and IO subsystem integrations. It powers low-latency transactions and real-time analytics at scale for the world's most complex workloads. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned solutions create endless global opportunities for our IBMers, so the door is always open for those who want to grow their career.

Your Role And Responsibilities: As a key member of our dynamic team, you will play a vital role in crafting exceptional software experiences. Your responsibilities will encompass the design and implementation of innovative features, fine-tuning and sustaining existing code for optimal performance, and guaranteeing top-notch quality through rigorous testing and debugging. Collaboration is at the heart of what we do, and you'll be working closely with fellow developers, designers, and product managers to ensure our software aligns seamlessly with user expectations. The role requires good personal organisation and the ability to work well within a distributed global team in a fast-paced and exciting environment. You will be office-based, working with senior software engineers who will help you integrate into the team, the department, and the wider IBM. You will be joining a development squad following Design Thinking and Agile principles, where you are expected to collaboratively develop creative solutions. The work can be varied, and flexibility to learn new technologies and skills is key as we look to help grow your career within IBM. A positive attitude and a passion to succeed are essential for joining a high-performing software development team at IBM.

Preferred Education: Master's Degree

Required Technical And Professional Expertise: A minimum of 8 years of experience in software development. A minimum of 6 years of experience in Golang, Python, or C/C++, and in API development. A minimum of 1 year of experience programming in Python. Experience with operating system concepts (serialization, concurrency, multi-threading) and data structures (arrays, pointers, hash buckets). Experience with SQL databases (Db2, Oracle, SQL Server, PostgreSQL, MySQL, etc.). Experience with software development best practices, including coding standards, code reviews, source control management, build processes, and testing. Demonstrated communication, teamwork, and problem-solving skills. Experience with cloud-based technologies, showcasing familiarity with modern cloud ecosystems and tools (AWS/Azure/IBM Cloud). 5+ years of experience with cloud/container skills: familiarity with cloud and container technologies, including Docker, Kubernetes, Red Hat OpenShift, etc.

Preferred Technical And Professional Experience: Knowledge of and/or experience with database design and query optimization. Knowledge of and/or experience with optimization problems and the algorithms to solve them, such as dynamic programming. Knowledge of serverless and stateless computing services like Lambda or Code Engine. Experience using Linux operating systems. Security domain expertise. Knowledge of version control systems such as GitHub. Demonstrated analytical and problem-solving skills. Familiarity with distributed filesystems and data storage techniques.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

Introduction: Db2 is a world-class relational database. 100% of the Fortune 100 companies and more than 80% of the Fortune 500 have one or more members of the Db2 family installed, helping to run their business. IBM is continuing to modernize Db2 to be cloud-native, bringing new features and capabilities while delivering the mission-critical features the world depends on. Db2 is supported across several hyperscalers, including IBM Cloud, AWS, and Azure, and in a number of deployment models, including self-managed and fully managed SaaS, along with tight integration with cloud-native services. The Db2 engine itself is blazingly fast and is written in C/C++ with deep OS and IO subsystem integrations. It powers low-latency transactions and real-time analytics at scale for the world's most complex workloads. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned solutions create endless global opportunities for our IBMers, so the door is always open for those who want to grow their career.

Your Role And Responsibilities: As a key member of our dynamic team, you will play a vital role in crafting exceptional software experiences. Your responsibilities will encompass the design and implementation of innovative features, fine-tuning and sustaining existing code for optimal performance, and guaranteeing top-notch quality through rigorous testing and debugging. Collaboration is at the heart of what we do, and you'll be working closely with fellow developers, designers, and product managers to ensure our software aligns seamlessly with user expectations. The role requires good personal organisation and the ability to work well within a distributed global team in a fast-paced and exciting environment. You will be office-based, working with senior software engineers who will help you integrate into the team, the department, and the wider IBM. You will be joining a development squad following Design Thinking and Agile principles, where you are expected to collaboratively develop creative solutions. The work can be varied, and flexibility to learn new technologies and skills is key as we look to help grow your career within IBM. A positive attitude and a passion to succeed are essential for joining a high-performing software development team at IBM.

Preferred Education: Master's Degree

Required Technical And Professional Expertise: A minimum of 5 years of experience in software development. A minimum of 3 years of experience in C/C++ programming. Experience with operating system concepts (serialization, concurrency, multi-threading) and data structures (arrays, pointers, hash buckets). Experience with SQL databases (Db2, Oracle, SQL Server, PostgreSQL, MySQL, etc.). Experience with software development best practices, including coding standards, code reviews, source control management, build processes, and testing. Demonstrated communication, teamwork, and problem-solving skills.

Preferred Technical And Professional Experience: Knowledge of and/or experience with optimization problems and the algorithms to solve them, such as dynamic programming. Experience using Linux operating systems. Security domain expertise. Knowledge of version control systems such as GitHub. Demonstrated analytical and problem-solving skills. Familiarity with distributed filesystems and data storage techniques.

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 4-6 years
Position: Gen-AI Developer (Hands-on)

Technical Requirements: Hands-on data science, agentic AI, and AI/Gen AI/ML/NLP. Azure services (App Services, Containers, AI Foundry, AI Search, Bot Services). Experience in C# and Semantic Kernel. Strong background in working with LLMs and building Gen AI applications. AI agent concepts. .NET Aspire. End-to-end environment setup for ML/LLM/agentic AI (Dev/Prod/Test). Machine learning and LLM deployment and development. Model training, fine-tuning, and deployment. Kubernetes, Docker, and serverless architecture. Infrastructure as Code (Terraform, Azure Resource Manager). Performance optimization and cost management: cloud cost management, resource optimization, auto-scaling, and cost-efficiency strategies for cloud resources. MLOps frameworks (Kubeflow, MLflow, TFX). Large language model fine-tuning and optimization. Data pipelines (Apache Airflow, Kafka, Azure Data Factory). Data storage (SQL/NoSQL, data lakes, data warehouses). Data processing and ETL workflows. Cloud security practices (VPCs, firewalls, IAM). Secure cloud architecture and data privacy. CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins). Automated testing and deployment for ML models. Agile methodologies (Scrum, Kanban). Cross-functional team collaboration and sprint management. Experience with model fine-tuning and infrastructure setup for local LLMs. Custom model training and deployment pipeline design. Good communication skills (written and oral).
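For the MLOps portion of this role, here is a minimal MLflow tracking sketch: logging fine-tuning parameters and metrics so runs are comparable across the Dev/Test/Prod environments mentioned. The parameter and metric values are illustrative.

```python
# Hedged sketch: track a fine-tuning run with MLflow (values are made up).
import mlflow

mlflow.set_experiment("llm-finetune")
with mlflow.start_run(run_name="lora-r16"):
    # Hyperparameters for a hypothetical LoRA fine-tune.
    mlflow.log_params({"base_model": "llama-3-8b", "lora_rank": 16, "lr": 2e-4})
    # Per-epoch training loss, logged as a metric series.
    for epoch, loss in enumerate([1.92, 1.41, 1.18], start=1):
        mlflow.log_metric("train_loss", loss, step=epoch)
    # Final evaluation score for the run.
    mlflow.log_metric("eval_rougeL", 0.41)
```

Run locally, this writes to an mlruns directory; pointing MLflow at a shared tracking server is what makes runs comparable across environments and team members.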

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Senior Applied Scientist
Bangalore, Karnataka, India
Date posted: Aug 01, 2025
Job number: 1854651
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Applied Sciences
Employment type: Full-Time

Overview: Do you want to be part of a team that delivers innovative products and machine learning solutions across Microsoft to hundreds of millions of users every month? The Microsoft Turing team is an innovative engineering and applied research team working on state-of-the-art deep learning models, large language models, and pioneering conversational search experiences. The team spearheads the platform and innovation for conversational search and the core Copilot experiences across Microsoft's ecosystem, including BizChat, Office, and Windows. As a Senior Applied Scientist on the Turing team, you will be involved in tight, timeline-based, hands-on data science work, including training models, creating evaluation sets, building infrastructure for training and evaluation, and more. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications

Required Qualifications: Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 5+ years of related experience (e.g., statistics, predictive analytics, research); OR Master's Degree in one of those fields AND 3+ years of related experience; OR a Doctorate in one of those fields AND 1+ year(s) of related experience; OR equivalent experience. 3+ years of industrial experience coding in C++, C#, C, Java, or Python. Prior experience with data analysis and understanding, examining data from large-scale systems to identify patterns or create evaluation datasets. Familiarity with common machine learning and deep learning frameworks and concepts, and with the use of LLMs and prompting. Experience in PyTorch or TensorFlow is a bonus. Ability to communicate technical details clearly across organizational boundaries.

Other Requirements: Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check: this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications: Solid ability and effectiveness working end-to-end in a challenging technical problem domain (plan, design, execution, continuous release, and service operation). Prior experience applying deep learning techniques and driving end-to-end AI product development (search, recommendation, NLP, document understanding, etc.). Prior experience with Azure or other cloud pipelines or execution graphs. Self-driven, results-oriented, and high-integrity, with the ability to work collaboratively, solve problems with groups, find win/win solutions, and celebrate successes. Customer-, end-result-, and metrics-driven in design and development. Keen ability and motivation to learn, enter new domains, and manage through ambiguity. Solid publication track record at top conferences such as ACL, EMNLP, SIGKDD, AAAI, WSDM, COLING, WWW, NIPS, and ICASSP. #M365Core

Responsibilities: As an Applied Scientist on our team, you will be responsible for and will engage in: driving projects from design through implementation, experimentation, and finally shipping to our users, which requires deep dives into data to identify gaps, coming up with heuristics and possible solutions, using LLMs to create the right model or evaluation prompts, and setting up the engineering pipeline or infrastructure to run them; devising evaluation techniques, datasets, criteria, and metrics for model evaluations, which are often SOTA models or metrics/datasets; and hands-on ownership of fine-tuning and the use of language models, including dataset creation, filtering, review, and continuous iteration. This requires working in a diverse, geographically distributed team environment where collaboration and innovation are valued. You will have an opportunity for direct impact on the design, functionality, security, performance, scalability, manageability, and supportability of Microsoft products that use our deep learning technology.

Benefits and perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

The role of Anti-Money Laundering (AML) Scenario Development & Enhancement (SDE) Statistician is part of the Strategic Business Solutions group of AIM, based in Bangalore and reporting into the AVP/VP leading the team. The scope of work includes all aspects of analysis performed by the team across projects: threshold tuning, segmentation, and data modeling/validation efforts, depending on current needs and project plans.

A primary area of focus for this position will be threshold tuning for optimization, developing logistic regression models to predict customer behaviour, identifying anomalies in transaction and customer behaviour, outlier detection, ATL threshold tuning, segmenting customers into homogeneous groups using clustering, and logistic regression model performance review, while maintaining the flexibility to switch among work streams based on business needs.

The SDE Statistician will follow the globally consistent methodology but is expected to show a high level of initiative and creativity and to suggest enhancements to the current methodologies. The role requires working closely with business partners based in the other geographies in which Citi operates (e.g., U.S., APAC, and EMEA).

Requirements include a background in analysis using databases, warehouses, and data processing, plus experience with statistics and data mining. Experience and knowledge in banking and finance, especially in the AML area, will be a plus. In addition, the ability to read and create formal documentation is highly desirable.

Responsibilities:

Apply quantitative and qualitative data analysis methods; prepare statistical and non-statistical data exploration and advanced statistical analysis to support the threshold tuning or segmentation work streams.
Validate data, identify data quality issues (if any), and work with Technology to address them.
Analyze and interpret data reports, draw conclusions, and make recommendations answering specific business needs.
Automate data extraction and data preprocessing tasks. Perform ad hoc data analyses. Design and maintain complex data manipulation processes.
Provide consistent documentation and presentations.
Develop new transaction monitoring scenarios based on emerging financial crime risk.
Document solutions and present results in a simple, comprehensive way to a non-technical audience, as well as write more formal documentation using statistical vocabulary.
Generate new ideas, concepts, and models to improve methods of obtaining and evaluating quantitative and qualitative data.
Identify relationships and trends in data, as well as any factors that could affect the results of research. Question and validate assumptions.
Escalate identified risks and sensitive areas in terms of methodology and processes.

Qualifications:

4-6 years of experience in the analytics industry.
Previous experience with financial services companies (retail banking, small business banking, commercial, institutional, private banking).
Experience in the focus areas listed above: threshold tuning, logistic regression modeling, anomaly and outlier detection, and clustering-based segmentation.
Good knowledge of SAS, SQL, and Hive; knowledge of Python is preferred but not mandatory.
Strong statistics and data analytics academic background and knowledge of quantitative methods.
Highly skilled in MS Excel; VBA experience is a plus.
Experience in reporting the results of analysis in clear written form and in presenting findings during meetings and conference calls.

Education: Master's degree in a numerate subject such as Mathematics, Operational Research, Business Administration, or Economics from a premier institute, or a track record of performance that demonstrates this ability.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

------------------------------------------------------
Job Family Group: Decision Management
------------------------------------------------------
Job Family: Specialized Analytics (Data Science/Computational Statistics)
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
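The core techniques this role lists — clustering customers into homogeneous groups, fitting logistic regression models, and cutting an ATL-style threshold — fit naturally together. The scikit-learn sketch below is illustrative only: the synthetic transaction features, log-scaling, four segments, and top-decile cut are assumptions for demonstration, not Citi's tuning methodology.

```python
# A minimal sketch of the clustering-then-scoring pattern the role describes:
# segment customers with KMeans, then fit a per-segment logistic regression to
# rank alerts and cut an illustrative top-decile threshold.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(mean=3, sigma=1, size=(1000, 3))      # e.g. txn count, value, velocity
y = (X[:, 1] > np.quantile(X[:, 1], 0.9)).astype(int)   # stand-in "productive alert" flag

X_scaled = StandardScaler().fit_transform(np.log1p(X))  # tame heavy-tailed features first

# Step 1: homogeneous customer segments
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Step 2: one model per segment, so thresholds can be tuned segment by segment
for seg in np.unique(segments):
    mask = segments == seg
    if y[mask].sum() in (0, mask.sum()):
        continue                                        # skip degenerate segments
    model = LogisticRegression().fit(X_scaled[mask], y[mask])
    risk = model.predict_proba(X_scaled[mask])[:, 1]
    threshold = np.quantile(risk, 0.9)                  # ATL-style cut: top risk decile
    flagged = int((risk >= threshold).sum())
    print(f"segment {seg}: threshold={threshold:.3f}, alerts={flagged}")
```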

Posted 1 day ago

Apply

7.5 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Infrastructure Engineer
Project Role Description: Assist in defining requirements, designing and building data center technology components, and testing efforts.
Must have skills: PostgreSQL Administration
Good to have skills: NA
Minimum 7.5 Year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Infrastructure Engineer, you will assist in defining requirements, designing and building data center technology components, and testing efforts. A typical day involves collaborating with various teams to ensure that the infrastructure meets the necessary specifications and standards. You will engage in problem-solving activities, contribute to design discussions, and support the implementation of technology solutions that enhance operational efficiency and reliability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PostgreSQL Administration.
- Strong understanding of database design and optimization techniques.
- Experience with backup and recovery strategies for PostgreSQL databases.
- Familiarity with performance tuning and monitoring tools.
- Knowledge of security best practices for database management.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PostgreSQL Administration.
- This position is based in Pune.
- A 15 years full time education is required.
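As a concrete example of the monitoring side of this role, the sketch below polls pg_stat_activity for long-running queries via psycopg2. The DSN, the five-minute cutoff, and print-based reporting are placeholder assumptions; a production check would feed a monitoring system instead.

```python
# A minimal sketch of routine PostgreSQL health monitoring: list queries that
# have been running longer than a cutoff. Requires psycopg2 and a role with
# permission to read pg_stat_activity; the DSN below is a placeholder.
import psycopg2

LONG_RUNNING_SQL = """
SELECT pid, state, now() - query_start AS runtime, left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
"""

def report_long_queries(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(LONG_RUNNING_SQL)
        for pid, state, runtime, query in cur.fetchall():
            print(f"pid={pid} state={state} runtime={runtime} :: {query}")

if __name__ == "__main__":
    report_long_queries("dbname=appdb user=monitor host=localhost")  # placeholder DSN
```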

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Senior MSSQL Database Administrator
Location: Pune, India (On-site)
Employment Type: Full-time

Job Overview

We are seeking a highly skilled and experienced Senior MSSQL Database Administrator to join our enterprise technology team at Pansoft Technologies LLC. This is a full-time, on-site role in Pune. You will be responsible for the comprehensive administration, design, troubleshooting, and management of Microsoft SQL Server databases, with a strong focus on cluster installation, high availability (HA), disaster recovery (DR), performance tuning, and replication. You will ensure the continuous availability, performance, and integrity of critical database systems.

Key Responsibilities

Database Administration and Troubleshooting: Provide expert-level database administration, including installation, configuration, patching, and upgrades of MSSQL Server instances. Proactively monitor database health, performance, and capacity, identifying and resolving complex issues. Perform root cause analysis for database-related incidents and implement preventive measures.

Database Design and Management: Collaborate on database design and schema modifications to support application development and optimization. Manage database objects, user permissions, and security configurations. Implement and enforce database best practices and standards.

Database Replication: Design, configure, and maintain the various types of MSSQL Server replication (snapshot, transactional, merge) to ensure data synchronization and consistency. Monitor replication lag and troubleshoot replication agents and distributors.

Cluster Installation and Setup: Lead cluster configuration: install and configure SQL Server clustering software, focusing on SQL Server Always On Availability Groups. Configure and prepare each node in the cluster environment, verifying correct configurations, storage paths, and required access for each SQL Server instance.

Cluster High Availability (HA) and Disaster Recovery (DR) Setup: Configure automatic failover within SQL Server Always On Availability Groups to ensure high availability in case of node failure. Design and configure disaster recovery solutions: implement robust disaster recovery using Always On secondary replicas at geographically remote sites for minimal downtime and data loss. Develop comprehensive backup strategies for clustered databases: ensure accurate configuration of full, differential, and transaction log backups for the entire cluster, including cloud-based backups (e.g., Backup to URL) and replication to secondary nodes.

Database Instance and Resource Management: Manage CPU, memory, and I/O allocation across cluster nodes, configuring load balancing within SQL Server Always On to distribute query traffic across secondary replicas for optimal read performance. Conduct extensive clustered-database performance tuning: monitor and optimize the performance of the SQL Server cluster, adjusting settings such as MAXDOP and identifying bottlenecks.

Database Synchronization and Replication: Maintain data consistency across all nodes in the cluster, managing replication lag in SQL Server Always On and ensuring real-time synchronization. Configure and maintain data replication between primary and secondary cluster nodes for availability and redundancy. Handle and resolve any data conflicts that arise during replication, ensuring continuous data consistency across all cluster nodes.

Database Cluster Monitoring and Alerts: Implement continuous monitoring of the SQL Server cluster for performance metrics, system health, and failover status, including disk space, CPU usage, memory consumption, and error/warning logs. Establish robust alerting mechanisms (e.g., SQL Server Agent, custom scripts) to notify administrators of failures, performance degradation, or other critical issues within the cluster (e.g., replication delays, node failure).

Qualifications

Database administration and troubleshooting skills. Experience in database design and database management. Experience in database replication. Excellent problem-solving and communication skills. Ability to work effectively in a team environment. Relevant certifications in MSSQL or database administration (e.g., MCSA: SQL 2016 Database Administration; MCSE: Data Management and Analytics). Bachelor's degree in Computer Science, Information Technology, or a related field. (ref:hirist.tech)
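The cluster monitoring and alerting described above can be prototyped against the Always On dynamic management views. The Python/pyodbc sketch below flags replicas that are not healthy; the connection string is a placeholder, VIEW SERVER STATE permission is assumed, and real alert delivery (SQL Server Agent, pager, email) is left as a stub.

```python
# A minimal sketch of Always On health alerting: poll the HADR DMVs and flag
# any replica that is not synchronized and connected. Connection string and
# alert wiring are placeholder assumptions; requires pyodbc.
import pyodbc

HEALTH_SQL = """
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc,
       ars.connected_state_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
  ON ars.replica_id = ar.replica_id;
"""

def check_ag_health(conn_str: str) -> list[str]:
    problems = []
    with pyodbc.connect(conn_str) as conn:
        for name, role, health, connected in conn.execute(HEALTH_SQL).fetchall():
            if health != "HEALTHY" or connected != "CONNECTED":
                problems.append(f"{name} ({role}): health={health}, state={connected}")
    return problems

if __name__ == "__main__":
    cs = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=ag-listener;Trusted_Connection=yes"
    for issue in check_ag_health(cs):   # wire this into email/pager alerting
        print("ALERT:", issue)
```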

Posted 1 day ago

Apply

7.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Job Overview

We are seeking a highly skilled and experienced Lead Data Engineer (AWS) to spearhead the design, development, and optimization of our cloud-based data infrastructure. As a technical leader, you will drive scalable data solutions using AWS services and modern data engineering tools, ensuring robust data pipelines and architectures for real-time and batch data processing. The ideal candidate is a hands-on technologist with a deep understanding of distributed data systems, cloud-native data services, and team leadership in Agile environments.

Responsibilities:

Design, build, and maintain scalable, fault-tolerant, and secure data pipelines using AWS-native services (e.g., Glue, EMR, Lambda, S3, Redshift, Athena, Kinesis).
Lead end-to-end implementation of data architecture strategies, including ingestion, storage, transformation, and data governance.
Collaborate with data scientists, analysts, and application developers to understand data requirements and deliver optimal solutions.
Ensure best practices for data quality, data cataloging, lineage tracking, and metadata management using tools like the AWS Glue Data Catalog or Apache Atlas.
Optimize data pipelines for performance, scalability, and cost-efficiency across structured and unstructured data sources.
Mentor and lead a team of data engineers, providing technical guidance, code reviews, and architecture recommendations.
Implement data modeling techniques (OLTP/OLAP), partitioning strategies, and data warehousing best practices.
Maintain CI/CD pipelines for data infrastructure using tools such as AWS CodePipeline and Git.
Monitor production systems and lead incident response and root cause analysis for data infrastructure issues.
Drive innovation by evaluating emerging technologies and proposing improvements to the existing data platform.

Skills & Qualifications:

Minimum 7 years of experience in data engineering, with at least 3+ years in a lead or senior engineering role.
Strong hands-on experience with AWS data services: S3, Redshift, Glue, Lambda, EMR, Athena, Kinesis, RDS, DynamoDB.
Advanced proficiency in Python/Scala/Java for ETL development and data transformation logic.
Deep understanding of distributed data processing frameworks (e.g., Apache Spark, Hadoop).
Solid grasp of SQL and experience with performance tuning in large-scale environments.
Experience implementing data lakes, lakehouse architectures, and data warehousing solutions in the cloud.
Knowledge of streaming data pipelines using Kafka, Kinesis, or AWS MSK.
Proficiency with infrastructure-as-code (IaC) using Terraform or AWS CloudFormation.
Experience with DevOps practices and tools such as Docker, Git, Jenkins, and monitoring tools (CloudWatch, Prometheus, Grafana).
Expertise in data governance, security, and compliance in cloud environments. (ref:hirist.tech)
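As one small example of the pipeline orchestration this role covers, the boto3 sketch below starts an AWS Glue job run and polls it to completion. The job name claims-ingest-daily and its argument are hypothetical; credentials are assumed to come from the standard AWS configuration chain.

```python
# A minimal sketch of Glue orchestration via boto3: kick off a job run and
# poll until it reaches a terminal state. Job name and arguments below are
# placeholder assumptions.
import time
import boto3

glue = boto3.client("glue")

def run_glue_job(job_name: str, arguments: dict) -> str:
    run_id = glue.start_job_run(JobName=job_name, Arguments=arguments)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)   # Glue runs are long-lived; poll sparingly

if __name__ == "__main__":
    final = run_glue_job("claims-ingest-daily",                     # hypothetical job
                         {"--source_path": "s3://my-bucket/raw/"})  # hypothetical arg
    print("job finished with state:", final)
```

In practice a scheduler such as Airflow or Step Functions would own this polling loop, but the API surface is the same.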

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Please Read Carefully Before Applying

Do NOT apply unless you have 3+ years of real-world, hands-on experience in the requirements listed below. Do NOT apply if you are not in Delhi or the NCR, or are unwilling to relocate. This is NOT a WFH opportunity: we work 5 days from the office, so please do NOT apply if you are looking for a hybrid or work-from-home role.

About Gigaforce

Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience.

Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California.

At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js. We are driven by a culture of curiosity, excellence, and inclusion. At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued. Our employees enjoy comprehensive medical benefits, equity participation, meal cards, and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive.

We're seeking NLP & Generative AI Engineers with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market.

Key Responsibilities

Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval.
Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering.
Work on scalable, cloud-based pipelines for training, serving, and monitoring models.
Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts.
Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers.
Utilize and contribute to open-source tools and frameworks in the ML ecosystem.
Deploy production-ready solutions using MLOps practices: Docker, Kubernetes, Airflow, MLflow, etc.
Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows.
Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, Hugging Face).
Champion best practices in model validation, reproducibility, and responsible AI.

Required Skills & Qualifications

2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer.
Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification).
Proven expertise in modern NLP & GenAI models, including transformer architectures (e.g., BERT, GPT, T5); generative tasks such as summarization, QA, and chatbots; and fine-tuning and prompt engineering for LLMs.
Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML).
Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, and Scikit-learn.
Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred.
Experience with document processing pipelines and an understanding of structured/unstructured insurance documents is a big plus.
Familiarity with MLOps tools such as MLflow, DVC, FastAPI, Docker, KubeFlow, and Airflow.
Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks).

Preferred Qualifications

Experience deploying GenAI models in production environments.
Contributions to open-source projects in the ML/NLP/LLM space.
Background in insurance, legal, or financial domains involving text-heavy workflows.
Strong understanding of data privacy, ethical AI, and responsible model usage. (ref:hirist.tech)
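As a taste of the document-understanding work described above, the sketch below summarizes an unstructured claim note with a Hugging Face pipeline. The model choice and the claim text are illustrative assumptions; any seq2seq summarizer could be swapped in.

```python
# A minimal sketch of claim-note summarization with the Hugging Face
# transformers pipeline. Model choice and input text are illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

claim_note = (
    "Insured reports rear-end collision at an intersection on 03/12. "
    "Third-party driver admitted fault at the scene per the police report. "
    "Vehicle towed; estimated repair cost exceeds the deductible. "
    "Potential subrogation against the third-party carrier."
)

summary = summarizer(claim_note, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])   # short synopsis for downstream routing
```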

Posted 1 day ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must have skills: SAP Basis Administration
Good to have skills: NA
Minimum 5 Year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. Your typical day involves collaborating with teams to ensure the successful design of applications and meeting business needs.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the team in implementing innovative solutions
- Ensure timely delivery of projects
- Mentor junior team members for their professional growth

Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP Basis Administration
- Strong understanding of SAP architecture and system landscapes
- Experience in SAP system installations, upgrades, and performance tuning
- Knowledge of SAP security and authorization concepts
- Hands-on experience in SAP system monitoring and troubleshooting

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP Basis Administration
- This position is based at our Nagpur office
- A 15 years full-time education is required

Posted 1 day ago

Apply

4.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Experience: 4+ years in backend software development with strong exposure to asynchronous programming models

Job Summary

We are seeking a highly skilled and experienced Senior Software Engineer with strong expertise in Java, asynchronous programming, Spring Boot, Vert.x, and MongoDB. The ideal candidate will play a key role in building scalable, high-performance backend services for modern enterprise applications. You will be expected to drive design discussions, contribute to architecture, and mentor junior developers.

Key Responsibilities

Design and implement asynchronous, non-blocking microservices and APIs using Vert.x and Spring Boot.
Optimize application performance, scalability, and reliability in high-throughput environments.
Model and manage data in MongoDB, ensuring efficient schema design and indexing.
Collaborate with DevOps, frontend, and QA teams to deliver end-to-end solutions.
Perform code reviews, write unit and integration tests, and ensure best practices across the codebase.
Troubleshoot production issues and participate in on-call rotations (if required).
Mentor and guide junior developers and contribute to internal knowledge-sharing sessions.
Work in Agile/Scrum teams and contribute to sprint planning, estimations, and retrospectives.

Required Skills and Experience

4+ years of backend development experience in Java.
Strong expertise in asynchronous programming, event-driven systems, and non-blocking I/O.
Deep understanding of Vert.x, including the event bus, workers, and reactive patterns.
Hands-on experience with Spring Boot microservices architecture.
Proficiency in MongoDB, its aggregation framework, and schema design.
Familiarity with RESTful APIs and OpenAPI/Swagger specifications.
Experience with message brokers like Kafka or RabbitMQ is a plus.
Strong debugging and performance tuning skills.
Solid grasp of software engineering principles (OOP, design patterns, SOLID).

Preferred Qualifications

Experience building SaaS platforms or fintech/banking domain systems.
Knowledge of reactive frameworks like Project Reactor or RxJava.
Familiarity with containerized deployments using Docker and Kubernetes.
Exposure to CI/CD tools (Jenkins, GitLab CI, etc.).

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. (ref:hirist.tech)
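Vert.x itself is Java, but the non-blocking, event-loop model at the heart of this role can be illustrated compactly with Python's asyncio: many concurrent I/O waits on a single thread, none of them blocking. This is a concept sketch only — the service names and latencies are invented, and it is not Vert.x code.

```python
# A concept sketch of the non-blocking fan-out pattern an event-loop framework
# like Vert.x enables: handle a request by calling several backends
# concurrently without blocking the loop. Endpoints and delays are invented.
import asyncio
import random

async def fetch(service: str) -> str:
    await asyncio.sleep(random.uniform(0.1, 0.5))   # stand-in for non-blocking I/O
    return f"{service}: ok"

async def handle_request() -> list[str]:
    # Fan out to several backends concurrently, then gather the replies;
    # total latency is roughly the slowest call, not the sum of all calls.
    return await asyncio.gather(fetch("accounts"), fetch("ledger"), fetch("audit"))

if __name__ == "__main__":
    print(asyncio.run(handle_request()))
```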

Posted 1 day ago

Apply