Home
Jobs

588 Parsing Jobs - Page 19

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Lead Python Engineer – Backend & AI Integrations
Location: Gurgaon
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM – 8:00 PM
Experience: 3–8 years
Function: Backend Engineering | AI Platform Integration | Scalable Systems

About Darwix AI
Darwix AI is one of India’s fastest-growing GenAI SaaS companies, powering real-time decision intelligence for enterprise revenue teams. Our platform transforms frontline performance through:
- Transform+: Live agent nudges & call intelligence
- Sherpa.ai: GenAI-powered multilingual sales coach
- Store Intel: Computer vision for in-store sales analysis

We serve market leaders across BFSI, real estate, and retail—including IndiaMart, Wakefit, GIVA, Sobha, and Bank Dofar. Our stack processes thousands of voice conversations daily, powers real-time dashboards, and delivers high-stakes nudges that directly impact multi-crore revenue pipelines. We are building at the intersection of voice AI, backend scale, and real-time analytics. You will play a key role in shaping the tech foundation that drives this mission.

Role Overview
We’re looking for a Lead Python Engineer to architect, own, and scale the core backend systems that power Darwix AI’s GenAI applications. You’ll work at the confluence of backend engineering, data pipelines, speech processing, and AI model integrations—supporting everything from real-time call ingestion to multi-tenant analytics dashboards. You will lead a high-performing engineering pod, collaborate with product, AI, and infra teams, and mentor junior engineers. This is a high-impact, ownership-first role with direct influence over product velocity, system performance, and enterprise reliability.

Key Responsibilities
1. Backend Architecture & Infrastructure
- Design and maintain scalable APIs and backend systems using Python (FastAPI)
- Optimize data flow for speech-to-text transcription, diarization outputs, and call scoring workflows
- Build and maintain modular service components (STT, scoring engine, notification triggers)
- Manage asynchronous job queues (Celery, Redis) for large batch processing
- Ensure high availability, security, and scalability of backend systems across geographies
2. AI/ML Integration & Processing Pipelines
- Integrate with LLMs (OpenAI, Cohere, Hugging Face) and inference APIs for custom use cases
- Handle ingestion and parsing of STT outputs (WhisperX, Deepgram, etc.)
- Work closely with the AI team to productionize model outputs into usable product layers
- Manage embedding pipelines, RAG workflows, and retrieval caching across client tenants
3. Database & Data Engineering
- Design and maintain schemas across PostgreSQL, MongoDB, and TimescaleDB
- Optimize read/write operations for large call data, agent metrics, and dashboard queries
- Collaborate on real-time analytics systems used by enterprise sales teams
- Implement access controls and tenant isolation logic for sensitive sales data
4. Platform Reliability, Monitoring & Scaling
- Collaborate with the DevOps team on infrastructure orchestration (Docker, Kubernetes, GitHub Actions)
- Set up alerting, logging, and auto-recovery protocols for uptime guarantees
- Drive version control and CI/CD automation for releases with minimal regression
- Support benchmarking, load testing, and latency reduction initiatives
5. Technical Leadership & Team Collaboration
- Mentor junior engineers, review pull requests, and enforce code quality standards
- Collaborate with product managers on scoping and technical feasibility
- Break down large tech initiatives into sprints and delegate effectively
- Take ownership of technical decisions and present trade-offs with clarity

Required Skills & Experience
- 3–8 years of hands-on backend engineering experience, primarily in Python
- Strong grasp of FastAPI, REST APIs, job queues (Celery), and async workflows
- Solid experience with relational and NoSQL databases: PostgreSQL, MongoDB, Redis
- Familiarity with production systems involving large-scale API calls or streaming data
- Prior experience integrating 3rd-party APIs (e.g., OpenAI, CRM, VoIP, or transcription vendors)
- Working knowledge of Docker, CI/CD pipelines (GitHub Actions preferred), and basic infra scaling
- Experience working in high-growth SaaS or data-product companies

Bonus Skills (Preferred, Not Mandatory)
- Experience with LLM applications, vector stores (FAISS, Pinecone), and RAG pipelines
- Familiarity with speech-to-text engines (WhisperX, Deepgram) and audio processing
- Prior exposure to multi-tenant SaaS systems with role-based access and usage metering
- Knowledge of OAuth2, webhooks, event-driven architectures
- Experience with frontend collaboration (Angular/React) and mobile APIs
- Contributions to open-source projects, technical blogs, or developer communities
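The asynchronous batch-processing pattern this listing names (Celery workers fed from a Redis broker) can be sketched with the standard library alone. Everything below is invented for illustration: the job fields and the call-scoring stub are hypothetical stand-ins, and Celery/Redis themselves are not shown, only the enqueue/worker shape they formalize.

```python
import queue
import threading

jobs: "queue.Queue[dict]" = queue.Queue()  # stand-in for the Celery broker
results: list = []

def score_call(job: dict) -> dict:
    # Placeholder "call scoring" step; a real worker would parse STT output.
    return {"call_id": job["call_id"], "score": len(job["transcript"]) % 100}

def worker() -> None:
    # Stand-in for a Celery worker process consuming tasks off the broker.
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut the worker down
            break
        results.append(score_call(job))
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
jobs.put({"call_id": "c1", "transcript": "hello, I want a demo"})
jobs.put({"call_id": "c2", "transcript": "pricing question"})
jobs.put(None)
t.join()
print(sorted(r["call_id"] for r in results))  # ['c1', 'c2']
```

In the real stack, the API process (FastAPI) only enqueues and returns immediately; the broker and a pool of workers absorb the batch load.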

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join our dynamic team as a Web Scraping Engineer and play a crucial role in driving our data-driven strategies. As a key player, you will develop and maintain innovative solutions to automate data extraction, parsing, and structuring from various online sources. Your expertise will empower our business intelligence, market research, and decision-making processes. If you are passionate about automation, dedicated to ethical practices, and have a knack for solving complex problems, we want you!

Key Responsibilities
- Design, implement, and maintain web scraping solutions to collect structured data from publicly available online sources and APIs
- Parse, clean, and transform extracted data to ensure accuracy and usability for business needs
- Store and organize collected data in databases or spreadsheets for easy access and analysis
- Monitor and optimize scraping processes for efficiency, reliability, and compliance with relevant laws and website policies
- Troubleshoot issues related to dynamic content, anti-bot measures, and changes in website structure
- Collaborate with data analysts, scientists, and other stakeholders to understand data requirements and deliver actionable insights
- Document processes, tools, and workflows for ongoing improvements and knowledge sharing

Requirements
- Proven experience in web scraping, data extraction, or web automation projects
- Proficiency in Python or similar programming languages, and familiarity with libraries such as BeautifulSoup, Scrapy, or Selenium
- Strong understanding of HTML, CSS, JavaScript, and web protocols
- Experience with data cleaning, transformation, and storage (e.g., CSV, JSON, SQL/NoSQL databases)
- Knowledge of legal and ethical considerations in web scraping, with a commitment to compliance with website terms of service and data privacy regulations
- Excellent problem-solving and troubleshooting skills
- Ability to work independently and manage multiple projects simultaneously

Preferred Qualifications
- Experience with cloud platforms (AWS, GCP, Azure) for scalable data solutions
- Familiarity with workflow automation and integration with communication tools (e.g., email, Slack, APIs)
- Background in market research, business intelligence, or related fields

Skills: data extraction, data cleaning, BeautifulSoup, business intelligence, web automation, JavaScript, web scraping, data privacy regulations, web protocols, Selenium, Scrapy, SQL, data transformation, NoSQL, CSS, market research, automation, Python, HTML
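The parsing step described above can be sketched with only the standard library; the sample HTML is invented. A real scraper would more likely use BeautifulSoup or Scrapy and would add an HTTP client, retries, rate limiting, and robots.txt/terms-of-service compliance checks, which this sketch omits.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags as the page is parsed."""

    def __init__(self) -> None:
        super().__init__()
        self.links: list = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# Invented sample markup standing in for a fetched page.
page = '<ul><li><a href="/jobs/1">Job 1</a></li><li><a href="/jobs/2">Job 2</a></li></ul>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/jobs/1', '/jobs/2']
```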

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Title: R&D Data Engineer

About The Job
At Sanofi, we’re committed to providing the next-gen healthcare that patients and customers need. It’s about harnessing data insights and leveraging AI responsibly to search deeper and solve sooner than ever before. Join our R&D Data & AI Products and Platforms Team as an R&D Data Engineer and you can help make it happen.

What You Will Be Doing
Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives.

The R&D Data & AI Products and Platforms Team is a key team within R&D Digital, focused on developing and delivering Data and AI products for R&D use cases. This team plays a critical role in pursuing broader democratization of data across R&D and providing the foundation to scale AI/ML, advanced analytics, and operational analytics capabilities.

As an R&D Data Engineer, you will join this dynamic team committed to driving strategic and operational digital priorities and initiatives in R&D. You will work as part of a Data & AI Product Delivery Pod, led by a Product Owner, in an agile environment to deliver Data & AI Products. As part of this team, you will be responsible for the design and development of data pipelines and workflows to ingest, curate, process, and store large volumes of complex structured and unstructured data. You will have the ability to work on multiple data products serving multiple areas of the business.

Our vision for digital, data analytics and AI
Join us on our journey in enabling Sanofi’s digital transformation through becoming an AI-first organization. This means:
- AI Factory - Versatile Teams Operating in Cross-Functional Pods: Utilizing digital and data resources to develop AI products, bringing data management, AI and product development skills to products, programs and projects to create an agile, fulfilling and meaningful work environment.
- Leading-Edge Tech Stack: Experience building products that will be deployed globally on a leading-edge tech stack.
- World-Class Mentorship and Training: Working with renowned leaders and academics in machine learning to further develop your skill set.

We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people’s lives. We’re also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?

Main Responsibilities
Data Product Engineering:
- Provide input into the engineering feasibility of developing specific R&D Data/AI Products
- Provide input to the Data/AI Product Owner and Scrum Master to support planning, capacity, and resource estimates
- Design, build, and maintain scalable and reusable ETL/ELT pipelines to ingest, transform, clean, and load data from sources into central platforms/repositories
- Structure and provision data to support modeling and data discovery, including filtering, tagging, joining, parsing and normalizing data
- Collaborate with the Data/AI Product Owner and Scrum Master to share progress on engineering activities and flag any delays, issues, bugs, or risks with proposed remediation plans
- Design, develop, and deploy APIs, data feeds, or specific features required by product design and user stories
- Optimize data workflows to drive high performance and reliability of implemented data products
- Oversee and support junior engineers with Data/AI Product testing requirements and execution

Innovation & Team Collaboration
- Stay current on industry trends, emerging technologies, and best practices in data product engineering
- Contribute to a team culture of innovation, collaboration, and continuous learning within the product team

About You
Key Functional Requirements & Qualifications:
- Bachelor’s degree in software engineering or a related field, or equivalent work experience
- 3-5 years of experience in data product engineering, software engineering, or another related field
- Understanding of the R&D business and data environment preferred
- Excellent communication and collaboration skills
- Working knowledge of, and comfort working with, Agile methodologies

Key Technical Requirements & Qualifications
- Proficiency with data analytics and statistical software (incl. SQL, Python, Java, Excel, AWS, Snowflake, Informatica)
- Deep understanding and proven track record of developing data pipelines and workflows

Why Choose Us?
- Bring the miracles of science to life alongside a supportive, future-focused team
- Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs

Pursue Progress. Discover Extraordinary.
Progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let’s pursue progress. And let’s discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity.

Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
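The ETL/ELT pipeline work this listing describes reduces to an extract → transform → load flow; a minimal stdlib sketch follows. The CSV sample, field names, and in-memory "warehouse" are invented stand-ins for illustration, not Sanofi systems or data.

```python
import csv
import io

# Invented raw feed: one well-formed row, one row with a missing result.
raw_csv = """study_id,assay,result
S-001,elisa, 1.25
S-002,qPCR,
"""

def extract(text: str) -> list:
    """Parse the raw feed into row dicts (the 'ingest' step)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list) -> list:
    """Clean and normalize: trim whitespace, canonicalize names, parse numbers."""
    out = []
    for row in rows:
        value = (row["result"] or "").strip()
        out.append({
            "study_id": row["study_id"].strip(),
            "assay": row["assay"].strip().upper(),         # canonical assay tag
            "result": float(value) if value else None,     # keep missing as None
        })
    return out

def load(rows: list, store: dict) -> None:
    """Stand-in for a warehouse upsert keyed on study_id."""
    for row in rows:
        store[row["study_id"]] = row

warehouse: dict = {}
load(transform(extract(raw_csv)), warehouse)
print(warehouse["S-001"]["result"], warehouse["S-002"]["result"])  # 1.25 None
```

The same three stages map onto the production tools the ad names (e.g. Informatica or Snowflake pipelines); only the scale and the storage target change.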

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis.

Grade - T5

Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities
What your main responsibilities are:
- Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity
- Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data
- Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms
- Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis, using ETL and ELT processes
- Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations

Qualifications & Specifications
- Master’s/Bachelor’s degree in Engineering/Computer Science/Math/Statistics or equivalent
- Strong programming skills in Python/PySpark/SAS
- Proven experience with large data sets and related technologies – Hadoop, Hive, distributed computing systems, Spark optimization
- Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps
- Hands-on experience with Databricks, Delta Lake, Workflows
- Knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus
- Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs
- Experience with any BI tool like Power BI (good to have)
- Cloud migration experience (good to have)
- Cloud and Data Engineering certification (good to have)
- Working in an Agile environment
- 4-6 years of relevant work experience is required
- Experience with stakeholder management is an added advantage

What We Are Looking For
Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred.
Knowledge, Skills and Abilities: Fluency in English; Analytical Skills; Accuracy & Attention to Detail; Numerical Skills; Planning & Organizing Skills; Presentation Skills; Data Modeling and Database Design; ETL (Extract, Transform, Load) Skills; Programming Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company.

Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
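The "Data Quality Management" accountability in this listing (cleansing and readiness checks before analysis) reduces to a pattern like the sketch below. The record fields are hypothetical; a real pipeline would run equivalent checks in PySpark or SQL at warehouse scale.

```python
# Invented sample records: one duplicate key, one row failing a readiness check.
records = [
    {"shipment_id": "A1", "weight_kg": "12.5"},
    {"shipment_id": "A1", "weight_kg": "12.5"},  # duplicate key
    {"shipment_id": "A2", "weight_kg": ""},      # missing weight
]

def cleanse(rows):
    """Deduplicate on key and quarantine rows that fail basic quality checks."""
    seen, clean, rejects = set(), [], []
    for row in rows:
        key = row["shipment_id"]
        if key in seen:
            continue  # drop duplicate keys
        seen.add(key)
        if not row["weight_kg"]:
            rejects.append(row)  # route to a remediation queue
        else:
            clean.append({**row, "weight_kg": float(row["weight_kg"])})
    return clean, rejects

clean, rejects = cleanse(records)
print(len(clean), len(rejects))  # 1 1
```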

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis.

Grade - T5

Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities
What your main responsibilities are:
- Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity
- Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data
- Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms
- Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis, using ETL and ELT processes
- Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations

Qualifications & Specifications
- Master’s/Bachelor’s degree in Engineering/Computer Science/Math/Statistics or equivalent
- Strong programming skills in Python/PySpark/SAS
- Proven experience with large data sets and related technologies – Hadoop, Hive, distributed computing systems, Spark optimization
- Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps
- Hands-on experience with Databricks, Delta Lake, Workflows
- Knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus
- Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs
- Experience with any BI tool like Power BI (good to have)
- Cloud migration experience (good to have)
- Cloud and Data Engineering certification (good to have)
- Working in an Agile environment
- 4-6 years of relevant work experience is required
- Experience with stakeholder management is an added advantage

What We Are Looking For
Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred.
Knowledge, Skills and Abilities: Fluency in English; Analytical Skills; Accuracy & Attention to Detail; Numerical Skills; Planning & Organizing Skills; Presentation Skills; Data Modeling and Database Design; ETL (Extract, Transform, Load) Skills; Programming Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company.

Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role – Senior Gen AI Engineer
Job Location - Hyderabad
Mode of Interview - Virtual

Job Description
- Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data.
- Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders)
- Work on tasks involving language modeling, text generation, understanding, and contextual comprehension
- Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets
- Build and deploy AI applications on cloud platforms – any hyperscaler: Azure, GCP or AWS
- Integrate AI models with our company's data to enhance and augment existing applications

Role & Responsibility
- Handle data preprocessing, augmentation, and generation of synthetic data
- Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution)
- Develop and maintain AI pipelines
- Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models
- Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions
- Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions
- Ensure the robustness, efficiency, and scalability of AI systems
- Stay updated with the latest advancements in AI and machine learning technologies

Skills & Experience
- Strong foundation in machine learning, deep learning, and computer science
- Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers)
- Experience with natural language processing (NLP) and computer vision is a plus
- Ability to work independently and as part of a team
- Advanced programming skills in Python, especially AI-centric libraries like TensorFlow, PyTorch, and Keras, including the ability to implement and manipulate the complex algorithms fundamental to developing generative AI models
- Knowledge of natural language processing (NLP) for text generation projects like text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models
- Experience in data management, including data pre-processing, augmentation, and generation of synthetic data; this involves cleaning, labeling, and augmenting data to train and improve AI models
- Experience in developing and deploying AI models in production environments
- Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing and scaling AI solutions
- Should be able to bring new ideas and innovative solutions to our clients
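The "chunking" technique this listing mentions for custom datasets can be sketched as fixed-size overlapping windows over a document, computed before any embedding step. For simplicity the sketch counts words; real pipelines count tokenizer tokens, and the sample sentence is invented.

```python
def chunk(words: list, size: int, overlap: int) -> list:
    """Split a token list into overlapping windows of `size`, stepping by size - overlap."""
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

doc = "large language models are fine tuned on custom datasets".split()
pieces = chunk(doc, size=4, overlap=2)
print(pieces[0], pieces[1])
```

Overlap keeps context that straddles a window boundary retrievable from both neighboring chunks, at the cost of some duplicated embedding work.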

Posted 1 month ago

Apply

1.0 - 3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

We are looking for talented backend software engineers with 1 to 3 years of experience in the design and development of highly scalable, multi-threaded, high-performance distributed systems, with a particular focus on low latency and real-time processing in capital market trading systems. You will play a key role in developing an order-matching engine benchmarking solution for the stock exchange. If you're a self-driven engineer passionate about performance optimization and scalability challenges, we'd love to have you join our growing team. You'll work alongside multidisciplinary teams to build and iterate on solutions from concept to production, with real-world impact on the capital markets industry.

Responsibilities
- End-to-End Project Ownership: Lead the design, development, and execution of benchmarking solutions for a trading system, ensuring optimal performance under high loads.
- System Scalability and Optimization: Fine-tune key system parameters (e.g., memory, socket buffer sizes) to ensure reliable performance at peak trading hours, with an emphasis on low latency and scalability.
- Performance Monitoring and Reporting: Implement centralized monitoring dashboards to track system health, identify bottlenecks, and produce detailed performance reports.
- ETI Integration and Reliable Networking: Work on integrating exchange trading interfaces (ETI) with order-matching engines, ensuring consistent performance across reliable UDP connections.
- Collaboration with SMEs: Work closely with client-side subject matter experts (SMEs) to understand system requirements, including hardware procurement and performance thresholds for given setups.

Requirements
- Technologies: Golang, caching technologies such as Redis and Memcached, and low-level programming
- 1 to 3 years of hands-on experience in Golang: deep technical knowledge and experience developing high-performance, multi-threaded applications.
- Experience in large-scale distributed systems: particularly those dealing with capital markets (stock trading), where order processing, low latency, and real-time system stability are critical.
- Networking and Performance Tuning: strong knowledge of fine-tuning memory, socket buffer sizes (e.g., wmem, TPU), and other system-level configurations to ensure reliable and scalable performance under varying load conditions.
- Historical Data Parsing: experience in analyzing and parsing historical trading data, time-warping, and stress-testing real-world trading scenarios.
- Monitoring and Benchmarking Tools: experience setting up centralized monitoring and benchmarking dashboards to evaluate system performance and identify optimization opportunities.

Preferred Skills
- Experience with Stock Exchange Systems: knowledge of stock exchange protocols like ETI and working with order/trade systems would be a big advantage.
- Caching Technologies: experience with Redis or similar technologies for optimizing data access in high-throughput environments.
- Linux Kernel Level: experience deploying scalable systems in cloud and hybrid environments, with a solid understanding of the Linux kernel.
- Load Testing Expertise: prior experience in load testing using JMeter (or equivalent), particularly in capital markets, is highly desirable.

Additional Qualifications
- Quick Learner: ability to quickly understand new systems, protocols, and environments.
- Strong Communication Skills: ability to articulate complex technical challenges and collaborate effectively with cross-functional teams.
- AWS Certified (Preferred): AWS Solutions Architect Associate or Professional certification is a plus.

This job was posted by Shashank Patil from Oneture Technologies.
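The monitoring-and-reporting responsibility in this listing boils down to summarizing per-order latency samples into the percentiles a dashboard plots. The listing's stack is Golang; the sketch below uses Python with synthetic samples purely to show the shape of such a report, and a real harness would measure the matching engine itself.

```python
import random
import statistics

random.seed(42)
# Synthetic microsecond latencies standing in for measured order-matching times.
latencies_us = [random.lognormvariate(4, 0.5) for _ in range(10_000)]

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the value at position p% of the sorted samples."""
    ordered = sorted(samples)
    idx = min(int(p / 100 * len(ordered)), len(ordered) - 1)
    return ordered[idx]

report = {
    "p50_us": percentile(latencies_us, 50),
    "p99_us": percentile(latencies_us, 99),
    "max_us": max(latencies_us),
    "mean_us": statistics.fmean(latencies_us),
}
print({k: round(v, 1) for k, v in report.items()})
```

Tail percentiles (p99, max) matter more than the mean here, since a single slow match at peak load is what trading clients notice.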

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

On-site

Acronis is revolutionizing cyber protection—providing natively integrated, all-in-one solutions that monitor, control, and protect the data that businesses and lives depend on. We are looking for a Senior Quality Assurance Engineer to join our mission to create a #CyberFit future and protect all data, applications and systems across any environment. The team works both on implementation of brand new features and fixing existing product-related issues. In Virtualization backup team, you will be working on various challenging issues from various fields, such as: Figuring out in-depth work of VMware ESXi, Hyper-V, Virtuozzo and other hypervisors to address platform-specific issues; Deep understanding of file systems management and parsing (FAT, NTFS, Ext, XFS, ReFS); Understanding the boot sequence of operating systems to ensure bootability across different virtualization platforms during VM backup and restore; Performing integration with Azure, Google, Oracle, Amazon clouds, XenServer platforms. We are seeking for a person who will help us to develop and enhance tools and applications that supply all Acronis products and provide seamless end-customer experience to thousands of users worldwide. We are looking for a highly motivated person with strong desire to learn. The position involves a substantial portion of "hands-on" work and requires an individual able to work independently with minimal supervision. What You'll Do Testing of client-server applications designed to cover backup/recovery of virtualization environments Analyse feature requirements, verify product functionality alignment with the requirements Design, deploy, install, provision, troubleshoot, and maintain of internal virtualization infrastructure used for testing. Close collaboration with development teams leaders and architects to understand product requirements. Develop, maintain and execute manual tests according to functional specifications and other design and product documentation. 
Work closely with other teams to assist with troubleshooting of virtualization-related problems: evaluate and analyse issues, performance, and other metrics in order to provide recommendations for product improvements. Interact with the automation team: prepare test analytics for automation scenarios; run and analyse automated test launch results. Diagnose, solve and provide root cause analysis for application/hardware/OS/networking related issues. Analyse customers’ issues, investigate their root causes, and participate in technical discussions with the R&D team. What You Bring 3+ years of experience as a QA Engineer Deep knowledge of Quality Assurance theory: principles, methodologies and techniques Strong understanding of website development methodologies and quality processes Understanding of REST/JSON web services Proficiency in both Linux and Windows operating systems, with knowledge of installing, configuring and troubleshooting them Knowledge of basic programming/scripting principles would be an additional advantage Experience in test-case development and reporting Understanding of basic concepts of computer architecture, data structures and IT security Familiarity with virtualization systems Analytical mindset Detail-oriented, efficient and organized Upper-intermediate English Ability to work independently as well as part of a team Please submit your resume and application in English Who We Are Acronis is a global cyber protection company that provides natively integrated cybersecurity, data protection, and endpoint management for managed service providers (MSPs), small and medium businesses (SMBs), enterprise IT departments and home users. Our all-in-one solutions are highly efficient and designed to identify, prevent, detect, respond, remediate, and recover from modern cyberthreats with minimal downtime, ensuring data integrity and business continuity.
We offer the most comprehensive security solution on the market for MSPs with our unique ability to meet the needs of diverse and distributed IT environments. A Swiss company founded in Singapore in 2003, Acronis offers over twenty years of innovation with 15 offices worldwide and more than 1800 employees in 50+ countries. Acronis Cyber Protect is available in 26 languages in 150 countries and is used by over 20,000 service providers to protect over 750,000 businesses. Our corporate culture is focused on making a positive impact on the lives of each employee and the communities we serve. Mutual trust, respect and the belief that we can contribute to the world every day are the cornerstones of our team. Each member of our “A-Team” plays an instrumental role in driving the success of our innovative and expanding business. We seek individuals who excel in dynamic, global environments and have a never-give-up attitude, contributing to our collective growth and impact. Acronis is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, marital status, national origin, physical or mental disability, medical condition, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, gender identity or expression, or any other characteristic protected by applicable laws, regulations and ordinances.

Posted 1 month ago

Apply

10.0 - 14.0 years

5 - 8 Lacs

Hyderābād

On-site

India - Hyderabad JOB ID: R-213123 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Apr. 29, 2025 CATEGORY: Information Systems ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 45 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking a highly skilled and experienced Test Automation Engineering Manager to lead our automation team. The ideal candidate will have expertise in data automation, especially with Databricks and AWS, and be skilled in search-related programs, data catalogs, and UI validation. You will play a pivotal role in shaping the quality and reliability of complex, search-driven applications that handle large-scale data ingestion and real-time querying. This is a highly hands-on leadership role, ideal for someone who enjoys diving deep into technical challenges while also mentoring and guiding QA strategies at scale. You will be responsible for defining and executing end-to-end test strategies, from backend content crawling, document indexing, and API interaction to UI presentation and search experience. You'll work closely with cross-functional teams including backend engineers, frontend developers, data engineers, DevOps, and product owners, ensuring that all components of the system—from data ingestion (via Java-based crawlers and S3 document pipelines) to frontend search display (built on React and GraphQL)—function seamlessly and perform reliably under real-world loads. In this role, you are expected to be a quality champion, not just ensuring functional correctness but also owning performance, usability, and scalability aspects of search testing.
You’ll be at the intersection of search technology, cloud platforms, and UI/UX, driving excellence through hands-on implementation and strategic leadership. Roles & Responsibilities: Hands-On Testing & Automation Design, implement, and maintain comprehensive test strategies across UI, backend, and data layers of search-driven platforms. Perform hands-on testing of React-based UIs integrated with GraphQL APIs, ensuring a seamless and accurate search experience for end-users. Develop and maintain automated test suites using tools like Cypress, Playwright, or Selenium, integrated into CI/CD pipelines. Create robust GraphQL API test scenarios to validate search results, metadata mapping, and performance under various data loads. Search Engine & Data Flow Testing Validate integration of custom search engines (e.g., GCP Search Engine) with frontend interfaces. Test and ensure end-to-end search result accuracy—from Java-based web crawlers, S3 document ingestion, through to frontend UI. Verify the ingestion, parsing, indexing, and retrieval accuracy of documents stored in Amazon S3, including testing of content structure, metadata extraction, and search visibility. Collaborate with developers to test the effectiveness and coverage of Java crawlers, including content freshness, crawl depth, and data completeness. Technical Leadership, Strategy & Team Collaboration Define and drive the overall QA and testing strategy for UI and search-related components with a focus on scalability, reliability, and performance. Contribute to system architecture and design discussions, bringing a strong quality and testability lens early into the development lifecycle. Lead test automation initiatives, introducing best practices and frameworks that align with modern DevOps and CI/CD environments. Mentor and guide QA engineers, fostering a collaborative, growth-oriented culture focused on continuous learning and technical excellence.
Collaborate cross-functionally with product managers, developers, and DevOps to align quality efforts with business goals and release timelines. Conduct code reviews, test plan reviews, and pair-testing sessions to ensure team-level consistency and high-quality standards. Monitoring, Metrics & Continuous Improvement Define and track key quality metrics such as search accuracy, indexing delays, UI responsiveness, and automation coverage to ensure high product quality. Drive continuous improvement initiatives by identifying process gaps, enhancing test tools, and evolving testing strategies based on production feedback. Participate in production validations and incident reviews, and apply learnings to build more resilient systems. Ensure robust release readiness by conducting risk assessments, regression testing, and cross-functional validation across the release cycle. Collaborate with DevOps to maintain reliable CI/CD pipelines that support automated testing, fast feedback, and post-release monitoring. Good-to-Have Skills: Familiarity with distributed systems, databases, and large-scale system architectures. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Knowledge of search-related programming and algorithms. Experience with agile testing methodologies such as Scaled Agile. Must-Have Skills: 10–14 years of QA experience with a strong focus on frontend, backend, and data-centric application testing. Hands-on experience with UI testing of modern frontend applications built in React.js.
Strong knowledge of GraphQL APIs — including schema validation, query testing, and performance benchmarking. Proven experience testing custom search engine implementations, preferably on Google Cloud Platform (GCP) or similar. Deep understanding of document ingestion pipelines and metadata validation using Amazon S3 or other object stores. Familiarity with Java-based web crawlers (e.g., Apache Nutch or in-house frameworks), testing content coverage, freshness, and crawl performance. Proficiency in test automation tools such as Cypress, Playwright, or Selenium — including scripting and CI/CD integration. Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI for integrating test automation into release pipelines. Strong skills in debugging, log analysis, and issue triaging across distributed systems. Excellent communication skills with the ability to collaborate cross-functionally and lead QA efforts within agile teams. Education and Professional Certifications Bachelor’s degree in Computer Science or Engineering preferred; other engineering fields considered. Master’s degree and 6+ years’ experience. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
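The GraphQL query-testing responsibility described above can be sketched in a few lines of validation logic. This is a hedged illustration, not Amgen's actual schema: the `search` query shape, the field names (`id`, `title`, `score`), and the descending-score ordering rule are all assumptions made for the example.

```python
# Hypothetical GraphQL query a search test might send (schema is assumed).
SEARCH_QUERY = """
query Search($term: String!, $limit: Int!) {
  search(term: $term, limit: $limit) {
    results { id title score }
    totalCount
  }
}
"""

def validate_search_payload(payload: dict, limit: int) -> list[str]:
    """Return a list of human-readable problems; an empty list means the payload passed."""
    data = payload.get("data", {}).get("search")
    if data is None:
        return ["missing data.search in response"]
    problems = []
    results = data.get("results", [])
    if len(results) > limit:
        problems.append(f"got {len(results)} results, limit was {limit}")
    for i, r in enumerate(results):
        for field in ("id", "title", "score"):
            if field not in r:
                problems.append(f"result {i} missing field {field!r}")
    scores = [r["score"] for r in results if "score" in r]
    if scores != sorted(scores, reverse=True):
        problems.append("results are not sorted by descending score")
    return problems
```

In a real suite this check would run against live responses from the API under test, with the canned payload replaced by an HTTP call.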

Posted 1 month ago

Apply

10.0 years

0 Lacs

Mohali

On-site

Job Description Job Title: Chief AI Officer (CAIO) Location: Mohali Reports To: CEO Exp: 10+ Years About RChilli: RChilli is a global leader in HR Tech, delivering AI-driven solutions for resume parsing, data enrichment, and talent acquisition. We are looking for a visionary Chief AI Officer (CAIO) to drive AI strategy, innovation, and ethical AI deployment in HRTech. Key Responsibilities: AI Strategy & Leadership Develop and execute RChilli’s AI strategy aligned with business goals. Ensure ethical AI implementation and compliance with industry regulations. Be a change leader in adopting AI across the company. AI-Driven Product Innovation Lead AI research & development for NLP, machine learning, and predictive analytics. Implement AI for automated job descriptions, resume scoring, and candidate recommendations. Oversee AI-powered chatbots, workforce planning, and predictive retention models. Identify opportunities for AI implementation, including: Automated calls for candidate screening, interview scheduling, and feedback collection. AI-powered report generation for HR analytics, performance tracking, and compliance. AI-based note-taking and meeting summarization for enhanced productivity. Technology & Infrastructure Define and implement a scalable AI roadmap. Manage AI infrastructure, data lakes, ETL processes, and automation. Oversee data lakes and ETL tools such as Airflow and NiFi for efficient data management. Ensure robust data engineering and analysis frameworks. Generative AI, Conversational AI & Transformative AI Apply Generative AI for automating job descriptions, resume parsing, and intelligent recommendations. Leverage Conversational AI for chatbots, virtual assistants, and AI-driven HR queries. Utilize Transformative AI for workforce planning, sentiment analysis, and predictive retention models. Tool Identification & Implementation Identify business requirements and assess third-party AI tools available in the market. 
Implement and integrate AI tools to enhance operations and optimize business processes. Business Integration & Operations Collaborate with cross-functional teams to integrate AI into HRTech solutions. Understand and optimize business processes for AI adoption. Align AI-driven processes with business efficiency and customer needs. Leadership & Talent Development Build and mentor an AI team, fostering a culture of innovation. Promote AI literacy across the organization. Industry Thought Leadership Represent RChilli in AI forums, conferences, and industry partnerships. Stay ahead of AI trends and HRTech advancements. Required Skills & Qualifications: Technical Skills: Master’s/Ph.D. in Computer Science, AI, Data Science, or related field. 10+ years of experience in AI/ML, with 5+ years in leadership roles. Expertise in NLP, machine learning, deep learning, and predictive analytics. Experience in AI ethics, governance, and compliance frameworks. Strong proficiency in AI infrastructure, data engineering, and automation tools. Understanding of data lakes, ETL processes, Airflow, and NiFi tools. Clear concepts in data engineering and analysis. Leadership & Business Skills: Strategic thinker with the ability to align AI innovation with business goals. Excellent communication and stakeholder management skills. Experience in building and leading AI teams. Why Join RChilli? Lead AI Innovation: Shape AI-driven HR solutions in a globally recognized HRTech company. Impactful Work: Drive AI transformations in HR operations and talent acquisition. Growth & Learning: Work with a passionate AI research and product team. Competitive Package: Enjoy a competitive salary, benefits, and career growth opportunities. If you are a visionary AI leader ready to transform HRTech, join RChilli as our Chief AI Officer.
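As a flavor of the resume-parsing domain this role oversees, a contact-field extractor might start as small as the sketch below. The regex patterns and output field names are illustrative assumptions only, not RChilli's production parser or data model.

```python
import re

# Illustrative patterns only; production resume parsers use far richer models.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+\d{1,3}[\s-]?)?\d{10}")

def extract_contact_fields(text: str) -> dict:
    """Pull the first email address and phone number out of raw resume text."""
    email = EMAIL_RE.search(text)
    phone = PHONE_RE.search(text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }
```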

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You’ll Be Doing... As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources. This includes bringing both structured and unstructured data into our data warehouse and data lake with real-time streaming and/or batch processing to generate insights and perform analytics for business teams within Verizon. Understanding the business requirements. Translating them into technical designs. Working on data ingestion, preparation and transformation. Developing the scripts for data sourcing and parsing. Developing data streaming applications. Debugging production failures and identifying solutions. Working on ETL/ELT development. What We’re Looking For... You’re curious about new technologies and the game-changing possibilities they create. You like to stay up-to-date with the latest trends and apply your technical expertise to solve business problems. You'll Need To Have Bachelor’s degree or one or more years of experience. Experience with Data Warehouse concepts and Data Management life cycle. Even better if you have one or more of the following: Any related ETL/ELT developer certification. Accuracy and attention to detail. Good problem solving, analytical, and research capabilities. Good verbal and written communication.
Experience presenting to and influencing partners. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #AI&D Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
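The "scripts for data sourcing and parsing" responsibility above typically amounts to code like the following sketch: read JSON-lines input, reject malformed records, and keep a reject count for monitoring. The `event_id` and `ts` field names are hypothetical, chosen only to make the example concrete.

```python
import json

def parse_records(lines):
    """Parse JSON-lines input, keeping well-formed records and counting rejects.

    A record is kept only if it is valid JSON and carries the (assumed)
    required fields 'event_id' and 'ts'; everything else is counted as a
    reject so a downstream monitor can alert on data-quality drift.
    """
    good, rejected = [], 0
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines silently
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            rejected += 1
            continue
        if "event_id" not in rec or "ts" not in rec:
            rejected += 1
            continue
        good.append(rec)
    return good, rejected
```

The same shape works for batch files or a streaming consumer: feed it any iterable of lines.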

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Gāndhīnagar

On-site

Experience - 3 to 5 Years Location : GIFT CITY, Gandhinagar Qualification : B.Tech or B.E, B.C.A prior to M.C.A Requirements : Develop and manage mobile applications in Flutter – Android / iOS platforms Mobile app design and optimization; able to provide solutions to improve mobile app quality, and understand emerging technologies, standards and best practices to bring the mobile app product to the next level Continuously discover, evaluate, and implement new technologies and frameworks to maximize development efficiency Communicate regularly and write clean code. Review, analyze and resolve application issues as needed. Collaborate with team members on the design and implementation of new functionalities on the different platforms. Must have work experience with the Flutter SDK and the Dart programming language, along with Java and/or Swift. Strong knowledge of Flutter widgets like Cupertino for iOS and Material Components for Android. Knowledge of State Management (Bloc or Provider). Experience working on mobile platforms like Android/iOS is required. Experience using web services, REST APIs and data parsing using XML, JSON, etc. Strong unit test and debugging skills. Good knowledge of the SQLite database and Google Play Services like Push Notifications. Strong mobile UI design (multi-screen resolutions), coding, support and maintenance. Proficiency in using version control and continuous integration, with tools such as Git. Knowledge of Play Store publishing & distribution. Shift : 10:00 AM to 6:30 PM

Posted 1 month ago

Apply

1.0 - 8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Wissen Technology is Hiring for Python Developer About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges. Role Overview: We are looking for talented and motivated Python Backend Developers to join our growing team. The ideal candidate will have a solid foundation in backend development using Python and related technologies, along with a good understanding of software design principles and development methodologies. Experience: 1-8 Years Location: Mumbai Educational Qualification: Candidates from Tier 1 or Tier 2 Institutes only Key Responsibilities Design, develop, and maintain scalable backend services using Python (Django and related frameworks) Implement and optimize algorithms, data structures, and object-oriented solutions Write clean, maintainable code and robust unit tests using PyTest Parse and process structured data, including XML and file-based input/output operations Collaborate with database engineers to develop optimized SQL queries, procedures, and performance tuning Apply Service Design principles and contribute to architectural discussions Participate in Agile development processes, including sprint planning, code reviews, and daily stand-ups Work closely with cross-functional teams and communicate effectively with technical and non-technical stakeholders Required Skills: Strong backend development experience with Python Hands-on experience with Django, NumPy, and standard Python libraries In-depth understanding of Data Structures, OOPs, and Algorithms Proficient in SQL and database performance tuning Experience with XML parsing and file handling in Python Knowledge of unit testing frameworks, especially PyTest Familiarity with service design,
object-oriented, and functional programming concepts Experience working in Agile environments Excellent communication skills, both verbal and written Strong interpersonal skills and a professional, team-oriented approach The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, Quality Assurance & Test Automation. Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission-critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted as the Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the ‘Gold Standard’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.
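The "XML parsing and file handling in Python" requirement above might look like this standard-library sketch in practice; the `<trade>` element schema is an illustrative assumption, not an actual Wissen or client format.

```python
import xml.etree.ElementTree as ET

def parse_trades(xml_text: str) -> list[dict]:
    """Turn <trade symbol=... qty=... price=...> elements into plain dicts.

    Derives a 'notional' value (qty * price) during parsing, the kind of
    light transformation backend services often do on ingest.
    """
    root = ET.fromstring(xml_text)
    trades = []
    for t in root.iter("trade"):
        qty = int(t.get("qty", "0"))
        trades.append({
            "symbol": t.get("symbol"),
            "qty": qty,
            "notional": qty * float(t.get("price", "0")),
        })
    return trades
```

Reading from a file instead of a string is the same pattern with `ET.parse(path).getroot()`, and a PyTest suite would assert on the returned dicts exactly as below.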
Website: www.wissen.com LinkedIn: https://www.linkedin.com/company/wissen-technology Wissen Leadership: https://www.wissen.com/company/leadership-team/ Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All Wissen Thought Leadership: https://www.wissen.com/articles/ Employee Speak: https://www.ambitionbox.com/overview/wissen-technology-overview https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm Great Place to Work: https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/ https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k About Wissen Interview Process: https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/ Latest in Wissen in CIO Insider: https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality. POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer with 4+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability. KEY RESPONSIBILITIES: • Query Optimization: Design, write, debug and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra. • Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently. • Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations. • Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets.
• Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks. • Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics. • Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks. • Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability. • Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads. • Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines. • Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements. REQUIRED SKILLS AND QUALIFICATIONS: • Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra. • High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion. • Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets. • Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms. • Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases. • Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines. • Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics. • Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs. • Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance.
• Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively. PREFERRED QUALIFICATIONS: • Certification in any of the mentioned database technologies. • Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP. • Knowledge of distributed systems and large-scale data processing. • Familiarity with cloud-based database solutions and infrastructure. • Familiarity with large-scale data ingestion tools like Kafka, Spark or Flink. EDUCATIONAL REQUIREMENTS: • Bachelor’s degree in Computer Science, Information Technology, or a related field. Equivalent work experience will also be considered.
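The query-optimization and execution-plan-analysis skills listed above follow the same loop on any engine: read the plan, add or adjust an index, re-read the plan. A minimal self-contained demonstration using SQLite follows; the production stores here are MySQL/PostgreSQL, MongoDB, etc., so this is only an analogy, and the table and index names are invented for the example.

```python
import sqlite3

# In-memory database with a toy table large enough for the planner to matter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY, agent_id INTEGER, score REAL)")
conn.executemany("INSERT INTO calls (agent_id, score) VALUES (?, ?)",
                 [(i % 50, i / 1000.0) for i in range(1000)])

def plan(sql: str) -> str:
    """Return SQLite's query plan for a statement as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

query = "SELECT * FROM calls WHERE agent_id = 7"
before = plan(query)   # full table scan: no index on agent_id yet
conn.execute("CREATE INDEX idx_calls_agent ON calls (agent_id)")
after = plan(query)    # now a SEARCH using idx_calls_agent
```

On MySQL or PostgreSQL the same workflow uses `EXPLAIN` / `EXPLAIN ANALYZE`, with richer output but the same scan-vs-index-seek distinction.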

Posted 1 month ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Description Role & Responsibilities : Design and develop ETL workflows and Datasets/Datamarts in Alteryx to be used for visualization. Solid, deep experience with data extract, transform and load (ETL) tools such as Alteryx and high-level knowledge of data visualization tools (pref. Tableau/Power BI). Prepare technical specifications and documentation for Alteryx workflows, specifically ETL for creating datasets (to be used by Tableau Data Extract for BI). Write complex SQL queries on multiple tables using complex joins. Perform end-to-end data validation. Ability to interact with business users and understand their requirements. Develop impactful presentations and documents. Communicate complex topics to the team through both written and oral communications. Ensure data is stored effectively and provide the ability to maintain the data analytics platform, so that data can effectively be mined for information later. Conduct unit tests and develop database queries to analyse the effects and troubleshoot any issues that arise. Evaluate and improve existing systems that are in place, as well as collaborating with teams within the business to integrate these new systems to streamline workflows. Develop and update technical documentation for senior leaders and colleagues to serve as a reference guide. Understand whether there is a technical or business limitation to implementing a technology control/configuration. Understand and document the compensating control for managing or mitigating security risk that might exist due to the technical or business limitation. Provide recommendations to strengthen current processes and controls pertaining to technology platforms. Provide regular updates on assigned tasks to team members.
Technical Mind Set Alteryx (Mandatory) 2+ years of related experience with Alteryx Hands-on skills and experience in Alteryx Designer, Alteryx Scheduler, Alteryx Server (good to have), and the tools within Alteryx such as Predictive, Parsing, Transforms, Interface (good to have) Experience with data ETL (SQL, Alteryx) and ability to manipulate data and draw insights from large data sets. 2+ years writing SQL queries against any RDBMS with query optimization. Strong analytical, problem solving, and troubleshooting abilities. Good understanding of unit testing, software change management, and software release management Good understanding of the star schema and data models in the existing data warehouse. (Desirable) Tool Knowledge Python experience will be a definite plus Any other ETL experience will be a definite plus. Essential Strong communication and analytical skills Self-starter and able to self-manage. Ability to prepare accurate reports for all levels of staff in a language and tone appropriate to the audience. Good team player, with the ability to work on a local, regional, and global basis. Able to perform under pressure. Driven to succeed and go the extra mile (ref:hirist.tech)
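Since the posting lists Python experience as a plus, here is a hedged sketch of the same extract-transform-load pattern an Alteryx workflow implements, written in plain Python. The CSV column names and the derived `revenue` field are assumptions made for illustration, not an actual client dataset.

```python
import csv
import io
import sqlite3

def run_etl(csv_text: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV, derive a revenue column, and load a mart table.

    Mirrors a typical Alteryx flow: input tool (csv), formula tool
    (units * unit_price), output tool (insert into the mart).
    """
    conn.execute("CREATE TABLE IF NOT EXISTS sales_mart (region TEXT, revenue REAL)")
    loaded = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        revenue = float(row["units"]) * float(row["unit_price"])  # transform step
        conn.execute("INSERT INTO sales_mart VALUES (?, ?)", (row["region"], revenue))
        loaded += 1
    conn.commit()
    return loaded
```

The loaded table can then feed a Tableau or Power BI extract, just as the Alteryx-produced datasets described above do.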

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title : Java Backend Developer - SaaS Industry Location : Noida, Sector 62 Experience : 3+ Years (Seeking Immediate Joiners!) Type : Full-Time About The Role We are seeking a skilled Java Backend Developer to join our engineering team. You will be responsible for developing and maintaining scalable backend services and APIs for high-performance applications. The role requires hands-on experience with Java, Spring Boot, and MySQL, as well as strong problem-solving abilities and a proactive attitude. Key Responsibilities Develop, test, and maintain RESTful APIs using Java and Spring Boot Design database schemas and write optimized SQL queries for MySQL Ensure code quality by writing unit and integration tests (JUnit, Mockito) Collaborate with frontend, DevOps, and QA teams to deliver robust features Identify and troubleshoot performance and scalability issues in production Adhere to Git-based version control and agile development processes Write clean, maintainable, and well-documented code Required Skills Strong knowledge of Core Java (Java 8 or above) Experience with Spring Boot and REST API development Proficiency in MySQL (experience with AWS RDS is a plus) Solid understanding of JSON parsing and data transformation Experience in logging and exception handling (SLF4J, Logback) Familiarity with unit testing and integration testing frameworks We need someone who can join immediately or with a very short notice period. If you have a notice period of 30 days or more, PLEASE DO NOT APPLY. What We Offer Competitive salary and performance-based bonuses. Opportunities for professional growth and learning in the SaaS domain (ref:hirist.tech)
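As an illustration of the JSON parsing and data transformation skill mentioned above (shown in Python for brevity, though the role itself is Java-based), a hypothetical API payload can be flattened into rows ready for a database insert; the field names are invented:

```python
import json

# Hypothetical API payload; field names are illustrative only.
payload = ('{"user": {"id": 42, "name": "Asha"}, '
           '"orders": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}')

data = json.loads(payload)
# Flatten the nested structure into rows suitable for a relational insert.
rows = [
    {"user_id": data["user"]["id"], "sku": o["sku"], "qty": o["qty"]}
    for o in data["orders"]
]
print(rows)
```

In Java the equivalent step would typically use Jackson or Gson, but the flattening logic is the same.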

Posted 1 month ago

Apply

4.0 - 5.0 years

0 Lacs

Andaman and Nicobar Islands, India

On-site

Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us! Job Description Job Summary As an OT Cybersecurity Data Engineer, you will manage the design, implementation, and testing of our Security Information and Event Management (SIEM) system with a specific focus on integrating and analyzing data from critical OT/ICS environments. You will work with cybersecurity teams to ensure the monitoring, detection, and reporting of security threats within industrial infrastructure. We are looking for a strong understanding of SIEM and SOAR technologies, OT protocols, and cybersecurity best practices. You will report to the Cyber Team Leader and have a hybrid schedule working in Hinjewadi-Pune. Your Responsibilities Develop SIEM and SOAR solutions tailored for OT environments, considering the unique challenges and protocols involved. Integrate multiple OT data sources (e.g., IDS, EDR, control system logs, network traffic from industrial protocols) into the SIEM platform. Maintain custom parsers, normalizers, and correlation rules to analyze OT-specific logs and events within the SIEM. Collaborate with OT operations and engineering teams to understand their systems, data sources, and security monitoring requirements. Configure and improve the SIEM platform for performance, scalability, and stability in an OT context.
Maintain OT-focused dashboards and reports within the SIEM to provide actionable insights into security posture and potential threats. Tune and optimize SIEM rules and alerts to minimize false positives and ensure high-fidelity detection of OT security incidents. Maintain documentation for the OT SIEM architecture, data sources, rules, and operational procedures. Recommend new SIEM features, integrations, and related security technologies for enhancing OT security monitoring. The Essentials - You Will Have 4-5 years of demonstrated experience working with SIEM platforms (e.g., Sumo Logic, Palo Alto Cortex XSOAR) and an understanding of their architecture, configuration, and rule development. Understanding of OT protocols (e.g., Modbus, DNP3, IEC 61850), industrial control systems (e.g., PLC, SCADA, DCS), and their logging mechanisms. Experience parsing and normalising complex log formats, including those specific to OT devices and applications, and the ability to communicate technical information to both technical and non-technical audiences while working as part of a team. Specific experience integrating OT data sources with enterprise SIEM platforms. Knowledge of security frameworks and standards relevant to OT (e.g., NIST SP 800-82, IEC 62443). Experience with scripting languages (e.g., Python, PowerShell) for SIEM automation and data manipulation. Relevant certifications such as GICSP, GRID, CISSP, or SIEM-specific certifications. Familiarity with threat intelligence platforms and their integration with SIEM for OT threat detection. The Preferred - You Might Also Have You will have to understand relevant evolving technology, understand complex technology dependencies, and work across a range of service offerings that may leverage a wide array of technologies and partners.
Develop key product & service launches Collaborative culture across the automation engineering team while meeting C&I objectives Adopt technology best practices around technology & vendor evaluation and management & maintenance of technology platforms. What We Offer Our benefits package includes … Comprehensive mindfulness programmes with a premium membership to Calm Volunteer Paid Time off available after 6 months of employment for eligible employees Company volunteer and donation matching program – Your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation. Employee Assistance Program Personalized wellbeing programmes through our OnTrack program On-demand digital course library for professional development and other local benefits! At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles. Rockwell Automation’s hybrid policy expects employees to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.
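As a rough illustration of the log parsing and normalisation work described above, here is a minimal Python sketch that turns a hypothetical OT device log line into a flat event dict a SIEM could index; real vendor formats vary widely and the line format here is invented:

```python
import re

# Hypothetical OT device log line; real formats differ by vendor and protocol.
raw = "2024-05-01T10:15:02Z plc-07 MODBUS write coil=12 value=1 src=10.0.4.9"

PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<device>\S+) (?P<proto>\S+) (?P<action>\S+) (?P<kv>.*)"
)

def normalize(line):
    """Parse one raw line into a flat event dict a SIEM could index."""
    m = PATTERN.match(line)
    if m is None:
        return None  # unparseable lines would be routed to a dead-letter queue
    event = m.groupdict()
    # Expand trailing key=value pairs into top-level fields.
    for pair in event.pop("kv").split():
        k, _, v = pair.partition("=")
        event[k] = v
    return event

event = normalize(raw)
print(event["device"], event["coil"], event["src"])
```

A production parser would add timestamp normalisation and field-type coercion, but the parse-then-flatten shape is the same.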

Posted 1 month ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us! Job Description Job Summary As an OT Cybersecurity Data Engineer, you will manage the design, implementation, and testing of our Security Information and Event Management (SIEM) system with a specific focus on integrating and analyzing data from critical OT/ICS environments. You will work with cybersecurity teams to ensure the monitoring, detection, and reporting of security threats within industrial infrastructure. We are looking for a strong understanding of SIEM and SOAR technologies, OT protocols, and cybersecurity best practices. You will report to the Cyber Team Leader and have a hybrid schedule working in Hinjewadi-Pune. Your Responsibilities Develop SIEM and SOAR solutions tailored for OT environments, considering the unique challenges and protocols involved. Integrate multiple OT data sources (e.g., IDS, EDR, control system logs, network traffic from industrial protocols) into the SIEM platform. Maintain custom parsers, normalizers, and correlation rules to analyze OT-specific logs and events within the SIEM. Collaborate with OT operations and engineering teams to understand their systems, data sources, and security monitoring requirements. Configure and improve the SIEM platform for performance, scalability, and stability in an OT context.
Maintain OT-focused dashboards and reports within the SIEM to provide actionable insights into security posture and potential threats. Tune and optimize SIEM rules and alerts to minimize false positives and ensure high-fidelity detection of OT security incidents. Maintain documentation for the OT SIEM architecture, data sources, rules, and operational procedures. Recommend new SIEM features, integrations, and related security technologies for enhancing OT security monitoring. The Essentials - You Will Have 4-5 years of demonstrated experience working with SIEM platforms (e.g., Sumo Logic, Palo Alto Cortex XSOAR) and an understanding of their architecture, configuration, and rule development. Understanding of OT protocols (e.g., Modbus, DNP3, IEC 61850), industrial control systems (e.g., PLC, SCADA, DCS), and their logging mechanisms. Experience parsing and normalising complex log formats, including those specific to OT devices and applications, and the ability to communicate technical information to both technical and non-technical audiences while working as part of a team. Specific experience integrating OT data sources with enterprise SIEM platforms. Knowledge of security frameworks and standards relevant to OT (e.g., NIST SP 800-82, IEC 62443). Experience with scripting languages (e.g., Python, PowerShell) for SIEM automation and data manipulation. Relevant certifications such as GICSP, GRID, CISSP, or SIEM-specific certifications. Familiarity with threat intelligence platforms and their integration with SIEM for OT threat detection. The Preferred - You Might Also Have You will have to understand relevant evolving technology, understand complex technology dependencies, and work across a range of service offerings that may leverage a wide array of technologies and partners.
Develop key product & service launches Collaborative culture across the automation engineering team while meeting C&I objectives Adopt technology best practices around technology & vendor evaluation and management & maintenance of technology platforms. What We Offer Our benefits package includes … Comprehensive mindfulness programmes with a premium membership to Calm Volunteer Paid Time off available after 6 months of employment for eligible employees Company volunteer and donation matching program – Your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation. Employee Assistance Program Personalized wellbeing programmes through our OnTrack program On-demand digital course library for professional development and other local benefits! At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles. Rockwell Automation’s hybrid policy expects employees to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Mathematics & Statistics: Advanced knowledge of probability, statistics and linear algebra. Expertise in statistical modelling, hypothesis testing and experimental design. Machine Learning and AI: 4+ years of hands-on experience with GenAI applications with the RAG approach, vector databases, and LLMs. Hands-on experience with LLMs (Google Gemini, OpenAI, Llama, etc.), LangChain, LlamaIndex for context-augmented generative AI, Hugging Face Transformers, knowledge graphs, and vector databases. Advanced knowledge of RAG techniques is required, including expertise in hybrid search methods, multi-vector retrieval, Hypothetical Document Embeddings (HyDE), self-querying, query expansion, re-ranking, and relevance filtering. Strong proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, scikit-learn, SciPy, and Pandas, and high-level APIs like Keras is essential. Advanced NLP skills, including Named Entity Recognition (NER), Dependency Parsing, Text Classification, and Topic Modeling. In-depth experience with supervised, unsupervised and reinforcement learning algorithms. Proficiency with machine learning libraries and frameworks (e.g. scikit-learn, TensorFlow, PyTorch etc.) Knowledge of deep learning, natural language processing (NLP). Hands-on experience with Feature Engineering, Exploratory Data Analysis. Familiarity and experience with explainable AI, model monitoring, and data/model drift. Proficiency in programming languages such as Python. Experience with relational (SQL) and vector databases. Skilled in data wrangling, cleaning and preprocessing large datasets.
Experience with natural language processing (NLP) and natural language generation (NLG). ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
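To illustrate the retrieval step behind the RAG techniques listed above, here is a toy Python sketch that ranks documents by cosine similarity; it uses plain bag-of-words counts in place of learned embeddings and a Python list in place of a vector database, so it is a conceptual sketch only:

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for a vector database.
docs = [
    "interest rate swap agreement terms",
    "customer churn prediction model",
    "loan default risk scoring policy",
]

def embed(text):
    # Bag-of-words counts as a stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

print(retrieve("risk scoring for loan default"))
```

In a production RAG pipeline the same rank-then-select step would run over learned dense vectors, often combined with keyword search (hybrid retrieval) and a re-ranking pass.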

Posted 1 month ago

Apply

10.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Description Job Title: Chief AI Officer (CAIO) Location: Mohali Reports To: CEO Exp: 10+ Years About RChilli RChilli is a global leader in HR Tech, delivering AI-driven solutions for resume parsing, data enrichment, and talent acquisition. We are looking for a visionary Chief AI Officer (CAIO) to drive AI strategy, innovation, and ethical AI deployment in HRTech. Key Responsibilities AI Strategy & Leadership Develop and execute RChilli’s AI strategy aligned with business goals. Ensure ethical AI implementation and compliance with industry regulations. Be a change leader in adopting AI across the company. AI-Driven Product Innovation Lead AI research & development for NLP, machine learning, and predictive analytics. Implement AI for automated job descriptions, resume scoring, and candidate recommendations. Oversee AI-powered chatbots, workforce planning, and predictive retention models. Identify opportunities for AI implementation, including: Automated calls for candidate screening, interview scheduling, and feedback collection. AI-powered report generation for HR analytics, performance tracking, and compliance. AI-based note-taking and meeting summarization for enhanced productivity. Technology & Infrastructure Define and implement a scalable AI roadmap. Manage AI infrastructure, data lakes, ETL processes, and automation. Oversee data lakes and ETL tools such as Airflow and NiFi for efficient data management. Ensure robust data engineering and analysis frameworks. Generative AI, Conversational AI & Transformative AI Apply Generative AI for automating job descriptions, resume parsing, and intelligent recommendations. Leverage Conversational AI for chatbots, virtual assistants, and AI-driven HR queries. Utilize Transformative AI for workforce planning, sentiment analysis, and predictive retention models. Tool Identification & Implementation Identify business requirements and assess third-party AI tools available in the market. 
Implement and integrate AI tools to enhance operations and optimize business processes. Business Integration & Operations Collaborate with cross-functional teams to integrate AI into HRTech solutions. Understand and optimize business processes for AI adoption. Align AI-driven processes with business efficiency and customer needs. Leadership & Talent Development Build and mentor an AI team, fostering a culture of innovation. Promote AI literacy across the organization. Industry Thought Leadership Represent RChilli in AI forums, conferences, and industry partnerships. Stay ahead of AI trends and HRTech advancements. Technical Skills Required Skills & Qualifications: Master’s/Ph.D. in Computer Science, AI, Data Science, or related field. 10+ years of experience in AI/ML, with 5+ years in leadership roles. Expertise in NLP, machine learning, deep learning, and predictive analytics. Experience in AI ethics, governance, and compliance frameworks. Strong proficiency in AI infrastructure, data engineering, and automation tools. Understanding of data lakes, ETL processes, Airflow, and NiFi tools. Clear concepts in data engineering and analysis. Leadership & Business Skills Strategic thinker with the ability to align AI innovation with business goals. Excellent communication and stakeholder management skills. Experience in building and leading AI teams. Why Join RChilli? Lead AI Innovation: Shape AI-driven HR solutions in a globally recognized HRTech company. Impactful Work: Drive AI transformations in HR operations and talent acquisition. Growth & Learning: Work with a passionate AI research and product team. Competitive Package: Enjoy a competitive salary, benefits, and career growth opportunities. If you are a visionary AI leader ready to transform HRTech, join RChilli as our Chief AI Officer.
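As a toy illustration of the resume parsing mentioned above, here is a regex-based Python sketch; the resume snippet, field names, and patterns are illustrative only, and production parsers handle far more layouts, languages, and file formats:

```python
import re

# Hypothetical resume snippet for illustration.
resume = """Asha Verma
Email: asha.verma@example.com | Phone: +91-98765-43210
Skills: Python, NLP, Machine Learning"""

# Naive patterns: good enough for this fixed layout, not for real resumes.
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", resume).group()
phone = re.search(r"\+?[\d-]{10,}", resume).group()
skills = re.search(r"Skills:\s*(.+)", resume).group(1).split(", ")

print(email, phone, skills)
```

Real resume-parsing products combine layout analysis, NER models, and taxonomy matching on top of this kind of field extraction.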

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad

Remote

Job description Job Role : SentinelOne Migration Engineer /SIEM Engineer--Work From Home Experience : 5 to 11 Yrs Key Skills: SIEM Administration, SIEM Implementation, SIEM Migration, Integration Notice Period : 0 to 30 days Mode of Work : Remote (06:00 PM to 03:00 AM IST) Should be willing to work in Second shift Company: Cyber Towers, Quadrant 3, 3rd floor, Madhapur, Hyderabad -- 500081. Job Overview: We are seeking a talented and highly motivated SentinelOne Migration SIEM Engineer to join our Dedicated Defense group. As a key member of our team, you will be responsible for deploying and maintaining SentinelOne's AI SIEM to enhance threat detection, response, and overall security posture. This is an exciting opportunity for an individual with expertise in SIEM technologies, aiming to help safeguard critical systems and data from evolving cyber threats. Responsibilities: Integration & Optimization: Integrate and optimize SentinelOne AI SIEM to improve visibility and automate threat detection workflows. Threat Detection: Utilize SentinelOne's AI-powered analytics to build dashboard reports and automate critical reporting functions. Automation & Playbook Development: Develop automated detection and response playbooks based on SentinelOne data feeds, streamlining incident management and reducing time to resolution. Collaboration & Knowledge Sharing: Work closely with other security and IT teams to share threat intelligence, optimize SIEM use, and contribute to security strategy development. Reporting & Documentation: Develop and maintain dashboards, reports, and documentation related to SentinelOne deployment, performance, and incident metrics. Continuous Improvement: Continuously evaluate SentinelOne's capabilities and other relevant security tools to recommend improvements and refine detection capabilities. Required Qualifications: Bachelor's degree in Computer Science, Information Security, or a related field (or equivalent experience).
1+ year of experience working with SentinelOne AI SIEM Hands-on experience with other SIEM platforms (Splunk, IBM QRadar, Microsoft Sentinel, etc.) and integrating them with endpoint security tools. Strong understanding of cybersecurity principles, threat detection, and SIEM management. Proficiency in scripting and automation (Python, PowerShell, etc.). Experience with cloud security (AWS, Azure, GCP) and cloud-native SIEM solutions is a plus. Preferred Qualifications: SentinelOne certification (or equivalent industry certifications). Knowledge of compliance frameworks (e.g., NIST, ISO 27001, GDPR, etc.) and how they apply to security operations. Key Skills: Technical Skills: SentinelOne platform, SIEM tools, security automation, machine learning for cybersecurity, network security. Analytical Skills: Strong ability to analyze large datasets and correlate logs/events. Communication Skills: Excellent verbal and written communication skills for collaborating with cross-functional teams and providing clear reporting. Problem-Solving: Strong troubleshooting skills with the ability to resolve complex security issues quickly and effectively.
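As a minimal sketch of the kind of detection playbook described above, here is a threshold rule over pre-parsed events; the event schema and threshold are assumptions, and a real deployment would consume events from the SIEM's API rather than an in-memory list:

```python
from collections import defaultdict

# Hypothetical pre-parsed events; a real playbook would stream these
# from the SIEM rather than hard-code them.
events = [
    {"user": "svc-admin", "action": "login_failed", "src": "10.0.0.5"},
    {"user": "svc-admin", "action": "login_failed", "src": "10.0.0.5"},
    {"user": "svc-admin", "action": "login_failed", "src": "10.0.0.5"},
    {"user": "jdoe", "action": "login_failed", "src": "10.0.0.9"},
]

THRESHOLD = 3  # alert once a (user, source) pair reaches this many failures

def detect_bruteforce(events, threshold=THRESHOLD):
    """Return (user, src) pairs whose failed-login count meets the threshold."""
    counts = defaultdict(int)
    for e in events:
        if e["action"] == "login_failed":
            counts[(e["user"], e["src"])] += 1
    return [key for key, n in counts.items() if n >= threshold]

alerts = detect_bruteforce(events)
print(alerts)  # [('svc-admin', '10.0.0.5')]
```

A production rule would also apply a time window and feed matches into an automated response (lockout, ticket creation) rather than just returning them.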

Posted 1 month ago

Apply

2.0 years

0 Lacs

India

On-site

Responsibilities Drive execution of AI/ML product implementations for enterprise customers, focusing on contract lifecycle management and prompt engineering, following the company’s implementation methodology. Create and refine prompts to optimize AI models for contract management solutions, including document analysis, keyword identification, and scenario-based query responses. Assist customers in designing CLM platform configurations to align with their business needs and industry best practices. Provide quality assurance and support to ensure the accuracy and efficiency of AI model outcomes in contract management workflows. Act as a subject matter expert on contracting, legal negotiations, and CLM solutions to help customers make informed decisions. Contribute to the development of internal consulting methodologies and provide feedback to enhance product roadmaps based on customer interactions and implementation experiences. Have customer conversations about value proposition, implementation benefits, and product stickiness. Effective stakeholder management. Qualifications Educational Qualification: Degree in Law or LLB or specialization in Corporate Law. Experience: 1–2 years of relevant experience in contract drafting, contract negotiations, or legal consultancy. Demonstrated interest or experience in prompt writing, with prior examples or applications. Ability to conceptualize and formulate prompts to effectively address legal and contractual objectives. Strong analytical skills, with experience in large data sets, text parsing, and identifying trends in contract documents. Excellent communication and interpersonal skills, with the ability to foster peer-to-peer relationships and build customer trust. Proactive problem-solving mindset with the ability to manage multiple engagements simultaneously. 
Good To Have Knowledge about US Laws Has shown ability to write advanced prompts to garner output from leading LLMs In-depth knowledge of contract management, contract lifecycle management (CLM) platforms, and industry-specific contracting workflows. Knowledge in any programming language or Excel macros is a plus About Us With unmatched technology and category-defining innovation, Icertis pushes the boundaries of what’s possible with contract lifecycle management (CLM). The AI-powered, analyst-validated Icertis Contract Intelligence (ICI) platform turns contracts from static documents into strategic advantage by structuring and connecting the critical contract information that defines how an organization runs. Today, the world’s most iconic brands and disruptive innovators trust Icertis to fully realize the intent of their combined 10 million contracts worth more than $1 trillion, in 40+ languages and 93 countries. About The Team Who we are: Icertis is the only contract intelligence platform companies trust to keep them out in front, now and in the future. Our unwavering commitment to contract intelligence is grounded in our FORTE values—Fairness, Openness, Respect, Teamwork and Execution—which guide all our interactions with employees, customers, partners, and stakeholders. Because in our mission to be the contract intelligence platform of the world, we believe how we get there is as important as the destination. Icertis, Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Icertis, Inc. complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.
If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to careers@icertis.com or get in touch with your recruiter.
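As a small illustration of the prompt-writing skill described in this role, here is a hypothetical clause-extraction prompt template in Python; the structure and wording are illustrative and not tied to any particular LLM or to Icertis's product:

```python
# Hypothetical prompt template for a clause-extraction query.
TEMPLATE = """You are a contract analyst.
Contract excerpt:
\"\"\"{excerpt}\"\"\"
Question: {question}
Answer with the exact clause text, or "NOT FOUND" if absent."""

def build_prompt(excerpt, question):
    """Fill the template with a contract excerpt and a reviewer question."""
    return TEMPLATE.format(excerpt=excerpt, question=question)

prompt = build_prompt(
    "Either party may terminate this Agreement with 30 days written notice.",
    "What is the termination notice period?",
)
print(prompt)
```

Constraining the answer format ("exact clause text" / "NOT FOUND") is a common way to make LLM outputs easier to validate downstream.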

Posted 1 month ago

Apply

5.0 years

4 - 8 Lacs

Cochin

On-site

Role: AI/ML GenAI Engineer Location: Chennai/ Bangalore/ Hyderabad Experience: 5+ Years Job Description: Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data. Develop and optimize large-scale generative models like GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). Work on tasks involving language modeling, text generation, understanding, and contextual comprehension. Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets. Build and deploy AI applications on cloud platforms (any hyperscaler: Azure, GCP, or AWS). Integrate AI models with our company's data to enhance and augment existing applications. Role & Responsibility Handle data preprocessing, augmentation, and generation of synthetic data. Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution). Develop and maintain AI pipelines. Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models. Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions. Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions. Ensure the robustness, efficiency, and scalability of AI systems. Stay updated with the latest advancements in AI and machine learning technologies. Skills & Experience Strong foundation in machine learning, deep learning, and computer science. Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers). Experience with natural language processing (NLP) and computer vision is a plus. Ability to work independently and as part of a team. Knowledge of advanced programming like Python, and especially AI-centric libraries like TensorFlow, PyTorch, and Keras.
This includes the ability to implement and manipulate complex algorithms fundamental to developing generative AI models. Knowledge of natural language processing (NLP) for text generation projects like text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models. Experience in data management, including data pre-processing, augmentation, and generation of synthetic data. This involves cleaning, labeling, and augmenting data to train and improve AI models. Experience in developing and deploying AI models in production environments. Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing, and scaling AI solutions. Should be able to bring new ideas and innovative solutions to our clients.
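To illustrate the chunking technique mentioned above, here is a minimal character-level sketch with overlap; production pipelines usually chunk by tokens rather than characters and tune sizes per embedding model, so the numbers here are illustrative:

```python
def chunk(text, size=40, overlap=10):
    """Split text into overlapping character chunks for embedding.
    Sizes are tiny here for illustration; production chunks are usually
    measured in tokens, not characters."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Generative models are fine-tuned on custom datasets before deployment."
pieces = chunk(doc)
print(len(pieces), pieces[0])
```

The overlap preserves context that straddles a chunk boundary, at the cost of storing some text twice in the vector index.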

Posted 1 month ago

Apply

5.0 years

4 - 9 Lacs

Hyderābād

Remote

About Workato Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real-time, driving efficiency and agility. Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today's fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com. Why join us? Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company. But we also believe in balancing productivity with self-care. That's why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives. If this sounds right up your alley, please submit an application. We look forward to getting to know you! Also, feel free to check out why: Business Insider named us an "enterprise startup to bet your career on" Forbes' Cloud 100 recognized us as one of the top 100 private cloud companies in the world Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America Quartz ranked us the #1 best company for remote workers Responsibilities We are looking for an experienced AI Solutions Engineer to join our AI Solutions team, with a strong background in engineering and web development. In this role, you will be responsible for delivering a truly exceptional customer experience, as well as educating and supporting our customers on the AgentX Support product. This is a hands-on, highly technical role—much broader than a typical "click-and-configure" position.
You will be directly responsible for helping customers integrate Workato into their products, build flows, diagnose and report issues, and serve as the bridge between our customers and our product teams. Our work begins the moment a customer decides to use AgentX Support, and we stay with them every step of the way to ensure they get the most value from our product. In this role, you will also be responsible to: Design and implement AI-powered customer support automation solutions that reduce resolution times and improve customer satisfaction Develop intelligent ticket routing and classification systems to ensure customer issues reach the right agent faster Build conversational AI agents capable of handling common customer inquiries without human intervention Create analytics dashboards to measure and optimize the effectiveness of support automation solutions Continuously monitor and enhance system performance to ensure efficiency, reliability, and scalability Take ownership of customer communications and issues from initiation to resolution, delivering an outstanding customer experience Use strong communication skills to explain technically complex ideas to non-technical audiences Collaborate with the Support team to ensure an exceptional customer experience by making the product as easy to use, reliable, bug-free, and responsive as possible Troubleshoot and debug complex issues, understanding both our own codebase and the diverse technologies used by customers Create and deliver custom product demonstrations to support the Sales team and other internal stakeholders Enhance internal processes and promote teamwide knowledge sharing by contributing to the internal knowledge base Play a key role throughout the product development lifecycle, from ideation to implementation Support the Product Manager in crafting technical and design specifications for new features and improvements Requirements Please note: In this role, you will be supporting the EMEA/US business 
hours from 2 pm to 11 pm IST! Qualifications / Experience / Technical Skills B.Tech/B.E. or higher in Computer Science, Artificial Intelligence, Machine Learning, or a related technical field 5+ years of relevant experience in the design, development, and implementation of AI-driven solutions Proven experience in AI engineering, with a strong focus on agent-based systems Strong knowledge of JavaScript, DOM manipulation, and browser developer tools for front-end automation Experience working with WebSockets for implementing real-time communication in support interfaces Ability to develop custom web scraping solutions to extract structured data from various sources Solid understanding of anti-scraping techniques and experience with HTML parsing libraries 2–3 years of hands-on coding experience in Python and/or JavaScript Experience with customer support platforms such as Zendesk, Intercom, Freshdesk, or ServiceNow Demonstrated success implementing conversational AI for customer-facing applications Strong understanding of intent classification and entity extraction techniques for support queries Experience with support ticket analytics and automated response systems Familiarity with omnichannel support integration (chat, email, voice, social media) Understanding of key customer support metrics (CSAT, NPS, First Contact Resolution) and strategies to optimize them through automation Soft Skills / Personal Characteristics Strong collaboration skills, ability to adapt to a dynamic start-up environment, with a passion for making an impact Strong critical thinking, analytical skills, with an entrepreneurial and proactive mindset Ability to effectively prioritize tasks and manage time, even under high-pressure situations Strong written and oral communication skills in English, with the ability to convey complex technical concepts effectively to a non-technical audience Fast learner who can independently conduct extensive research, and synthesize ideas, information and 
options quickly Be proactive about solving problems and be ready to take on additional initiatives and responsibilities as they emerge To stand out in the hiring process, please take the time to respond to the Job Application Questions below with concise yet informative answers. All submissions are personally reviewed by the Hiring Team, not evaluated by AI.
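The requirements above mention HTML parsing libraries and custom web scraping. As an illustrative aside (not part of the posting), the sketch below shows the flavor of such a task using only Python's standard-library html.parser; the page fragment and the /tickets/ URLs are invented for the example.

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


# Hypothetical fragment standing in for a scraped support page.
html = (
    '<ul><li><a href="/tickets/1">Ticket 1</a></li>'
    '<li><a href="/tickets/2">Ticket 2</a></li></ul>'
)

parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/tickets/1', '/tickets/2']
```

In practice this kind of extraction is usually done with richer libraries (the posting leaves the choice open), but the stdlib version illustrates the structured-data-from-markup idea with no dependencies.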

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai

On-site

- 3+ years of experience building models for business applications
- PhD, or Master's degree and 4+ years of experience in CS, CE, ML, or a related field
- Experience with patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python, or a related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Do you want to use your expertise in translating innovative science into impactful products to develop a new business line in international emerging stores? If you do, the International Emerging Stores Payments team would love to talk to you about how to make that a reality.

As an applied scientist on our team, you will work with business leaders, scientists, and engineers to translate business and functional requirements into concrete deliverables and define the execution roadmap. You will partner with scientists and engineers on the design, development, testing, and deployment of scalable ML models. This is a unique, high-visibility opportunity for someone who wants to have impact, dive deep into large-scale solutions, enable measurable actions on the employee experience, and work closely with scientists and economists. This role combines science leadership and technical strength.

Key job responsibilities

As an Applied Scientist, ML Applications, you will:
- Lead applied scientists to deliver machine learning and AI solutions to production
- Design, develop, and evaluate innovative machine learning solutions to solve diverse challenges and opportunities for Amazon customers
- Advance the team's engineering craftsmanship and drive continued scientific innovation as a thought leader and practitioner
- Partner with the engineering team to deploy your models in production
- Partner with scientists from across ML teams within India Consumer Payments to solve complex problems
- Work directly with Amazonians from across the company to understand their business problems and help define and implement scalable ML solutions to solve them
- Mentor and develop junior scientists and developers

- Experience building machine learning models or developing algorithms for business applications
- Experience in professional software development

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies