1.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Description: Marketing & Sales Intern
Company: RChilli
Location: Mohali, India
Type: Internship (Full-time, Flexible Shifts)
Start Date: Immediate | Stipend: As per industry standards
🌟 About RChilli
RChilli is a global leader in AI-powered HR Tech solutions, helping companies worldwide simplify hiring with resume parsing, job matching, and data enrichment tools. Join us and be part of a team that works on cutting-edge tech, directly aligned with our CEO's vision, with opportunities to represent RChilli at international HR Tech events.
🎯 Who We're Looking For
MBA in Marketing & Sales (preferred)
Freshers (0–1 year) with 3–6 months internship/training
Strong communication (verbal & written)
Exposure to AI tools (ChatGPT, Canva, CRM, etc.)
High logical reasoning & professional ethics
Immediate joiners preferred
🔧 Responsibilities
Support sales campaigns, lead generation & customer engagement
Conduct market research & competitor analysis
Create digital & written marketing content
Collaborate with cross-functional teams on live projects
Use AI tools to enhance productivity
🚀 What You'll Gain
Hands-on experience in AI-driven HR Tech
Work directly on CEO-led projects
Chance to travel abroad for HR Tech events (top performers)
Internship Certificate + LOR + potential PPO
Posted 15 hours ago
0.0 years
0 Lacs
Mohali
On-site
Job Description: Marketing & Sales Intern
Company: RChilli
Location: Mohali, India
Type: Internship (Full-time, Flexible Shifts)
Start Date: Immediate | Stipend: As per industry standards
🌟 About RChilli
RChilli is a global leader in AI-powered HR Tech solutions, helping companies worldwide simplify hiring with resume parsing, job matching, and data enrichment tools. Join us and be part of a team that works on cutting-edge tech, directly aligned with our CEO’s vision, with opportunities to represent RChilli at international HR Tech events.
🎯 Who We’re Looking For
MBA in Marketing & Sales (preferred)
Freshers (0–1 year) with 3–6 months internship/training
Strong communication (verbal & written)
Exposure to AI tools (ChatGPT, Canva, CRM, etc.)
High logical reasoning & professional ethics
Immediate joiners preferred
🔧 Responsibilities
Support sales campaigns, lead generation & customer engagement
Conduct market research & competitor analysis
Create digital & written marketing content
Collaborate with cross-functional teams on live projects
Use AI tools to enhance productivity
🚀 What You’ll Gain
Hands-on experience in AI-driven HR Tech
Work directly on CEO-led projects
Chance to travel abroad for HR Tech events (top performers)
Internship Certificate + LOR + potential PPO
Posted 16 hours ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 321622BR
Job Type: Full Time
Your role
Are you an analytical thinker with experience in big data? Do you excel at developing innovative solutions? We are looking for a Data Developer with practical knowledge of Python and expertise in Semantic Web technologies. You will:
- Design, prototype, build, and maintain new data pipeline features on our data platform, as well as support existing ones through debugging and optimization.
- Implement quality assurance and data quality checks to ensure the completeness, validity, consistency, and integrity of data as it flows through the pipeline (see the sketch below this posting).
- Collaborate closely with a global team of researchers, engineers, and business analysts to build innovative data solutions.
Your team
You will be part of a nimble, multi-disciplinary Data Architecture team within Group CTO, collaborating closely with specialists across various areas of Group Technology. Our team provides the foundation for data-driven technology management, facilitating processes from strategic and architecture planning to demand management, development, and deployment. The team is globally distributed, with members primarily based in Switzerland, the UK, and the US.
Your expertise
You have:
- 10+ years of proven, hands-on experience in the development and design of data platforms, with a strong emphasis on data ingestion and integration.
- Interest in linked data and Semantic Web technologies as enablers for data science and machine learning.
- Strong command of application, data, and infrastructure architecture disciplines.
- Experience working in agile, delivery-oriented teams.
Desired:
- University degree, preferably in a technical or quantitative field such as statistics, computer science, or mathematics.
- Strong command of Python; proficiency in other languages (e.g., C++, Java) is desirable.
- Strong understanding of data and databases (SQL, NoSQL, triple stores, Hadoop, etc.).
- Experience with efficient processing of large datasets in a production system.
- Understanding of data structures and data manipulation techniques, including classification, parsing, and pattern matching.
- Experience with Machine Learning and Artificial Intelligence is a plus.
You are:
- Willing to take full ownership of problems and code, with the ability to hit the ground running and deliver exceptional solutions.
- A strong problem solver who anticipates issues and resolves them proactively.
- Skilled in communicating effectively with both technical and non-technical audiences.
About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
How We Hire
We may request you to complete one or more assessments during the application process. Learn more
Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves.
We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.
Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
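As a concrete illustration of the data quality checks described above — completeness, validity, consistency, and integrity — here is a minimal Python sketch using pandas. The table, column names, and rules are invented for illustration; they are not from the posting.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Run basic completeness/validity/consistency/integrity checks on a batch.

    Hypothetical rules: 'trade_id' must be unique and non-null,
    'notional' must be positive, 'currency' must be a known code.
    """
    known_ccy = {"USD", "EUR", "GBP", "CHF"}
    return {
        # completeness: share of non-null values in a required column
        "trade_id_complete": df["trade_id"].notna().mean(),
        # validity: notional amounts must be strictly positive
        "notional_valid": (df["notional"] > 0).mean(),
        # consistency: currency codes restricted to a reference set
        "currency_consistent": df["currency"].isin(known_ccy).mean(),
        # integrity: the primary key must be unique
        "trade_id_unique": df["trade_id"].is_unique,
    }

if __name__ == "__main__":
    batch = pd.DataFrame({
        "trade_id": [1, 2, 2, None],
        "notional": [1_000_000, -5, 250_000, 10_000],
        "currency": ["USD", "EUR", "XXX", "CHF"],
    })
    for check, value in run_quality_checks(batch).items():
        print(f"{check}: {value}")
```

In a real pipeline, checks like these would run as a gate between ingestion and publication, with failing batches quarantined rather than loaded.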
Posted 16 hours ago
15.0 years
0 Lacs
Noida
On-site
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must have skills: Python (Programming Language)
Good to have skills: Generative AI
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.
Roles & Responsibilities:
• Design, test, and optimise prompts for LLMs to support use cases which benefit the infra & application managed services.
• Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
• Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimisation, and schema transformation (see the chunking sketch below this posting).
• Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
• Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
• Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
• Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.
Professional & Technical Skills:
• Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
• Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
• Familiarity with data modelling, SQL, and RDBMS concepts.
• Experience with agentic workflows, token optimization, and schema chunking.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- A 15 years full time education is required.
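The chunking and token-optimisation duties above can be made concrete. Below is a minimal sketch of overlapping text chunking; it approximates tokens with whitespace-separated words, whereas a real deployment would count tokens with the model's own tokenizer (e.g., tiktoken), and every parameter here is illustrative.

```python
def chunk_text(text: str, max_tokens: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping chunks for LLM ingestion.

    Whitespace words stand in for model tokens here; production code
    would count real tokens with the target model's tokenizer.
    """
    words = text.split()
    if not words:
        return []
    step = max_tokens - overlap  # advance the window, keeping `overlap` words of context
    if step <= 0:
        raise ValueError("overlap must be smaller than max_tokens")
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

# Example: a long document becomes overlapping ~512-word windows
print(len(chunk_text("lorem ipsum " * 2000)))
```

The overlap preserves context across chunk boundaries, which typically improves retrieval quality in RAG pipelines at the cost of some duplicated tokens.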
Posted 16 hours ago
8.0 years
0 Lacs
India
Remote
Join phData, a dynamic and innovative leader in the modern data stack. We partner with major cloud data platforms like Snowflake, AWS, Azure, GCP, Fivetran, Pinecone, Glean and dbt to deliver cutting-edge services and solutions. We're committed to helping global enterprises overcome their toughest data challenges.
phData is a remote-first global company with employees based in the United States, Latin America and India. We celebrate the culture of each of our team members and foster a community of technological curiosity, ownership and trust. Even though we're growing extremely fast, we maintain a casual, exciting work environment. We hire top performers and allow you the autonomy to deliver results.
- 6x Snowflake Partner of the Year (2020, 2021, 2022, 2023, 2024, 2025)
- Fivetran, dbt, Alation, Matillion Partner of the Year
- #1 Partner in Snowflake Advanced Certifications
- 600+ Expert Cloud Certifications (Sigma, AWS, Azure, Dataiku, etc.)
- Recognized as an award-winning workplace in the US, India and LATAM
As a Staff Software Engineer on the Product Engineering team, you will:
- Design, implement, and maintain high-quality code to meet project requirements
- Take ownership of the code that you write: you understand it, you are intentional about the choices you make, and you aggressively hunt down bugs
- Mentor other engineers with the goal of moving them to the next level
- Learn from other engineers, even those with less experience
- Own large features from design to implementation, guiding mid and junior engineers through the lifecycle
- Lead discussions with the team to brainstorm solutions and address technical issues
- Resolve user issues with a keen focus on root cause analysis, thinking strategically about incorporating preventative measures into our software, builds, and tests to prevent future issues
- Influence coding standards and design practices to ensure consistency and quality across projects
Qualifications for the Ideal Candidate
- Experience: 8 to 12+ years.
- JVM Experience: You are an expert in Java and/or Kotlin, with a deep understanding of the JVM ecosystem.
- Build Systems: You have experience creating and maintaining custom builds in Gradle or Maven. Maybe you even enjoy it.
- Problem-Solving: You enjoy solving problems so much that you seek them out, or maybe even fabricate them yourself just so you can solve them.
- Collaboration and Communication: You are equally capable of explaining your ideas verbally as you are writing them down. When you disagree with others, you raise your concerns and work through the issues constructively. It frustrates you when others do not provide relevant information, so you strive to communicate relevant details to others clearly and effectively.
- Strong desire to learn and grow: As a team, we are always learning new technologies and challenging ourselves to grow. You need to enjoy learning if you are going to keep up.
- Proficiency with Large Codebases: Our code base is fairly large, and covers a wide variety of domains. Not only can you navigate large codebases easily, but you also have opinions on how they should be structured to improve developer experience.
- CI/CD and Automation: Boring work is something you really want to avoid, and your favorite way to avoid it is with automation.
- Cloud Infrastructure Experience: We deploy our infrastructure in AWS on Linux. You need a solid understanding of cloud infrastructure, troubleshooting techniques, and maybe some architecture experience. If you do not know the right answer, you at least know how to find it.
- Database Proficiency: Writing code that interacts with databases, writing SQL, and generally working with data should come naturally to you.
- Algorithms: Algorithms are second nature for you. You know when to sort a collection or when to wait. You know the difference between O(n) and O(n^2) time complexity and why that matters in a hot section of code (see the sketch below this posting).
- Data Structures: Your understanding of data structures is so deep that you can instinctively pick a list, set, map, or some other data structure based on the context, and you are usually right. Trees and graphs do not scare you. You know when and how to use these data structures effectively in real applications.
- Language Parsing or Compiler Experience: Bonus points if you have experience with custom parsers or compilers, especially if they used ANTLR.
phData celebrates diversity and is committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at phData. We are proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, color, religion, national origin, sex (including pregnancy), sexual orientation, gender identity, gender expression, age, veteran status, genetic information, disability, or other applicable legally protected characteristics. If you would like to request an accommodation due to a disability, please contact us at People Operations.
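A small illustration of the O(n) vs O(n^2) point from the Algorithms bullet, written in Python for brevity (the team described above works on the JVM, where the same trade-off holds between ArrayList and HashSet): deduplicating against a list is quadratic because each membership test scans the list, while a set makes the same loop linear.

```python
import time

def dedupe_quadratic(items):
    seen, out = [], []
    for x in items:          # O(n) iterations...
        if x not in seen:    # ...each doing an O(n) list scan -> O(n^2) overall
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    seen, out = set(), []
    for x in items:          # O(n) iterations...
        if x not in seen:    # ...with O(1) average hash lookup -> O(n) overall
            seen.add(x)
            out.append(x)
    return out

data = list(range(5_000)) * 2
for fn in (dedupe_quadratic, dedupe_linear):
    t0 = time.perf_counter()
    fn(data)
    print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")
```

On even this modest input the quadratic version is orders of magnitude slower, which is exactly why the distinction matters in a hot code path.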
Posted 16 hours ago
0 years
0 Lacs
India
Remote
Key Responsibilities
● Monitor production systems and job pipelines; respond promptly to alerts and anomalies
● Troubleshoot operational issues in collaboration with the development team
● Investigate incidents using logs, metrics, and observability tools (e.g., Grafana, Kibana)
● Perform recovery actions such as restarting pods, rerunning jobs, or applying known mitigations
● Operate in Kubernetes environments to inspect, debug, and manage components
● Support deployment activities through post-release validations and basic checks
● Validate data quality and flag anomalies to the relevant engineering teams
● Maintain clear documentation of incidents, actions taken, and resolution outcomes
● Communicate effectively with remote teams for operational handoffs and follow-ups
Required Qualifications
● Experience in production operations, system support, or DevOps roles
● Solid Linux skills (e.g., file system navigation, log analysis, process/network troubleshooting)
● Hands-on experience with Kubernetes and Docker in production environments
● Familiarity with observability tools (e.g., Grafana, Kibana, Prometheus)
● English proficiency for reading, writing, and asynchronous communication
● Strong execution discipline and ability to follow structured operational procedures
Preferred Qualifications
● Scripting ability (Python or Shell) for log parsing and automation (see the sketch below this posting)
● Basic SQL skills for data verification or debugging
● Experience with Hadoop and Flink pipelines for batch and stream processing is a strong plus
● Experience with large-scale distributed data systems or job scheduling frameworks
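A minimal sketch of the kind of log-parsing script the Preferred Qualifications mention, assuming an invented log format: it counts ERROR lines per component so a spike can be flagged to the relevant engineering team.

```python
import re
from collections import Counter

# Hypothetical line format: "2024-05-01T12:00:00 ERROR payment-service: timeout"
LINE_RE = re.compile(r"^\S+\s+(?P<level>[A-Z]+)\s+(?P<component>[\w-]+):")

def count_errors(path: str) -> Counter:
    """Count ERROR lines per component to spot anomalous spikes."""
    errors = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if m and m.group("level") == "ERROR":
                errors[m.group("component")] += 1
    return errors

if __name__ == "__main__":
    for component, n in count_errors("app.log").most_common(10):
        print(f"{component}: {n}")
```

In practice an observability stack like Grafana or Kibana would surface the same signal, but a short script like this is handy for ad-hoc triage on a box.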
Posted 16 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings!!
We are looking for a skilled Splunk Administrator with hands-on experience in deploying and managing Splunk Enterprise and Splunk Cloud. The ideal candidate should have experience in Splunk Enterprise Security (ES), Splunk UBA, and IT Service Intelligence (ITSI). This role requires strong technical skills, along with the ability to communicate effectively with customers.
Roles & Responsibilities:
✅ Splunk Deployment & Administration: Install, configure, and manage Splunk Enterprise and Splunk Cloud. Handle indexers, search heads, forwarders, and clustering. Optimize Splunk performance, storage, and scalability.
✅ Security & Splunk Monitoring Solutions: Implement and manage Splunk Enterprise Security (ES), Splunk UBA, and ITSI. Configure correlation searches, threat intelligence feeds, risk-based alerting (RBA), and dashboards. Troubleshoot security-related issues within Splunk.
✅ Customer Interaction & Troubleshooting: Engage with customers to understand their requirements and provide technical guidance. Troubleshoot and resolve Splunk-related issues, including log ingestion, parsing, and data onboarding.
✅ Splunk Architecture & Implementation: Design, deploy, and optimize Splunk Enterprise and Splunk Cloud environments. Lead end-to-end Splunk implementations, migrations, and upgrades. Manage search head clustering, indexer clustering, and data retention policies.
✅ Security & Observability Solutions: Architect and configure Splunk Enterprise Security (ES), Splunk UBA, and ITSI. Implement risk-based alerting (RBA), custom correlation searches, and advanced analytics. Integrate Splunk with SOAR, cloud platforms (AWS, Azure, GCP), and third-party security tools.
✅ Team Leadership & Customer Engagement: Lead and mentor a team of Splunk Administrators & Engineers. Interact with customers to gather requirements, design solutions, and conduct workshops. Review and improve Splunk use cases, dashboards, and data models.
✅ Optimization & Automation: Develop custom scripts (Python, Bash, PowerShell) for automation and orchestration. Tune Splunk performance, search queries, and indexing strategies. Implement best practices for data onboarding, parsing, and CIM compliance.
Interested candidates can share their updated resume to gayathri.ramaraj@locuz.com along with the details below.
Current CTC:
Expected CTC:
Notice Period:
Posted 20 hours ago
8.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Role: Software Architect - Embedded
You are an energetic, passionate, and innate software technology leader with excellent knowledge of designing and developing Linux-based embedded products, 8+ years of experience, and at least 4-5 years of technical leadership. You possess very good knowledge of Software Architecture and Design, Design Patterns, OOP concepts, Data Structures and Algorithms, Message Queues, Multi-threading applications, Networking concepts, and software security. You are competent to design, develop, and deliver software applications and embedded products.
Technical Skills Required:
- Hands-on experience in C/C++ and Embedded C (very strong exposure to C programming concepts).
- Strong command of Linux and the Linux OS.
- IPC: Inter-Process Communication exposure (multithreading and socket programming).
- Working experience with or knowledge of microprocessors like ARM 7/9, Cortex A8/A15, Qualcomm, Intel, and NXP i.MX will be a huge plus.
- Sound knowledge and hands-on experience in one or more technologies/platforms such as socket programming, multi-threading, ONVIF/RTSP, video codecs H.264/H.265, video parsing of H.264/H.265, image processing, embedded web servers, BLE, WiFi, RS485, UART, Push Notification (FCM), VoIP (SIP & RTP).
- Good knowledge and working experience in one or more tech stacks/frameworks such as FFmpeg, GStreamer, Qt/QML, LIVE555, OpenCV (image processing), networking fundamentals, and basic Linux commands.
- Proficiency in at least two or more languages from among C, Java, Python, C++, HTML/CSS, jQuery/JavaScript.
- Complete ownership of timely product delivery with impeccable software quality.
- Experience in building, leading, and managing multi-engineer project teams.
- Ability to navigate teams through fast-changing market needs.
- Strong people leadership skills in growing/nurturing/mentoring young engineers.
- Good understanding of JIRA, Confluence, SVN, Fisheye, Crucible, Sonar/Parasoft/LDRA, and Nessus/Nexpose.
Posted 21 hours ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JOB RESPONSIBILITIES:
The job entails working with our clients and partners to design, define, implement, roll out, and improve Data Quality solutions that leverage various tools available in the market, for example: Informatica IDQ, SAP DQ, SAP MDG, Collibra DQ, Talend DQ, a custom DQ solution, and/or other leading platforms, for the client's business benefit. The ideal candidate will be responsible for ensuring the accuracy, completeness, consistency, and reliability of data across systems. You will work closely with data engineers, analysts, and business stakeholders to define and implement data quality frameworks and tools.
As part of your role and responsibilities, you will get the opportunity to be involved in the entire business development life-cycle:
- Meet with business individuals to gather information and analyze existing business processes, determine and document gaps and areas for improvement, prepare requirements documents, functional design documents, etc. To summarize, work with the project stakeholders to identify business needs and gather requirements for the following areas: Data Quality and/or Data Governance or Master Data.
- Follow up on the implementation by conducting training sessions, and planning and executing the technical and functional transition to the support team.
- Ability to grasp business and technical concepts and transform them into creative, lean, and smart data management solutions.
- Development and implementation of Data Quality solutions on any of the above leading platform-based Enterprise Data Management Solutions:
  o Assess and improve data quality across multiple systems and domains.
  o Define and implement data quality rules, metrics, and dashboards.
  o Perform data profiling, cleansing, and validation using industry-standard tools (see the profiling sketch below this posting).
  o Collaborate with data stewards and business units to resolve data issues.
  o Develop and maintain data quality documentation and standards.
  o Support data governance initiatives and master data management (MDM).
  o Recommend and implement data quality tools and automation strategies.
  o Conduct root cause analysis of data quality issues and propose remediation plans.
  o Implement and take advantage of AI to improve and automate the Data Quality solution.
  o Leveraging SAP MDG/ECC experience, deep dive to perform root cause analysis for assigned use cases. Also work with Azure Data Lake (via Databricks) using SQL/Python.
  o Identify and build a data model (conceptual and physical) that provides an automated mechanism to monitor ongoing DQ issues. Multiple workshops may also be needed to work through various options and identify the one that is most efficient and effective.
  o Work with the business (Data Owners/Data Stewards) to profile data and expose patterns indicating data quality issues. Identify the impact on specific CDEs deemed important for each individual business.
  o Identify the financial impact of data quality issues, as well as the business benefit (quantitative/qualitative) from a remediation standpoint, while managing implementation timelines.
  o Schedule regular working groups with businesses that have identified DQ issues and ensure progression on RCA/remediation or on presenting in DGFs.
  o Identify business DQ rules on the basis of which KPIs/measures are stood up that feed into dashboards/workflows for BAU monitoring. Red flags are raised and investigated.
  o An understanding of the Data Quality value chain, starting with Critical Data Element concepts, Data Quality Issues, and Data Quality KPIs/Measures, is needed, along with experience owning and executing Data Quality Issue assessments to aid improvements to operational processes and BAU initiatives.
  o Highlight risks and hidden DQ issues to the Lead/Manager for further guidance/escalation.
  o Communication skills are important in this role as it is outward facing, and the focus has to be on clearly articulated messages.
  o Support the design, build, and deployment of data quality dashboards via Power BI.
  o Determine escalation paths and construct workflows and alerts which notify process and data owners of unresolved data quality issues.
  o Collaborate with IT & analytics teams to drive innovation (AI, ML, cognitive science, etc.).
  o Work with business functions and projects to create data quality improvement plans.
  o Set targets for data improvements/maturity. Monitor and intervene when sufficient progress is not being made.
  o Support initiatives which are driving data clean-up of the existing data landscape.
JOB REQUIREMENTS:
i. Education or Certifications:
- Bachelor's/Master's degree in engineering/technology/other related degrees.
- Relevant professional-level certifications from Informatica, SAP, Collibra, Talend, or any other leading platform/tools.
- Relevant certifications from DAMA, EDM Council, and CMMI-DMM will be a bonus.
ii. Work Experience:
- You have 4-10 years of relevant experience within the Data & Analytics area, with major experience around data management areas: ideally in Data Quality (DQ) and/or Data Governance or Master Data using relevant tools.
- You have in-depth knowledge of Data Quality and Data Governance concepts, approaches, methodologies, and tools.
- Client-facing consulting experience will be considered a plus.
iii. Technical and Functional Skills:
- Hands-on experience in any of the above DQ tools in the area of enterprise Data Management, preferably in complex and diverse systems environments.
- Exposure to data quality concepts: the data lifecycle, data profiling, and data quality remediation (cleansing, parsing, standardization, enrichment using 3rd-party plugins, etc.).
- Strong understanding of data quality best practices, concepts, data quality management frameworks, and data quality dimensions/KPIs.
- Deep knowledge of SQL and stored procedures.
- Strong knowledge of Master Data, Data Governance, and Data Security.
- Domain knowledge of SAP Finance modules is preferred.
- Hands-on experience with AI use cases in Data Quality or Data Management areas is good to have.
- Concepts and hands-on experience of master data management are preferred: matching, merging, and creation of golden records for master data entities.
- Strong soft skills: interpersonal, team, and communication skills (both verbal and written).
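To make the profiling and KPI ideas above concrete, here is a minimal Python sketch that computes per-column completeness and rule pass rates. The customer-master columns and rules are invented for illustration and are not tied to any specific DQ tool named above.

```python
import pandas as pd

def profile_dq(df: pd.DataFrame, rules: dict) -> pd.DataFrame:
    """Profile completeness per column and the pass rate of each DQ rule.

    `rules` maps a rule name to a function returning a boolean Series,
    mirroring the idea of business DQ rules feeding KPIs and dashboards.
    """
    completeness = df.notna().mean().rename("completeness")
    kpis = {name: rule(df).mean() for name, rule in rules.items()}
    print("Rule pass rates (KPIs):", kpis)
    return completeness.to_frame()

# Hypothetical customer master data and rules
df = pd.DataFrame({
    "customer_id": ["C1", "C2", None, "C4"],
    "country": ["DE", "XX", "FR", None],
    "credit_limit": [100, -1, 50, 200],
})
rules = {
    "country_is_iso2": lambda d: d["country"].isin({"DE", "FR", "IN", "US"}),
    "credit_limit_nonnegative": lambda d: d["credit_limit"] >= 0,
}
print(profile_dq(df, rules))
```

Platforms like Informatica IDQ or Collibra DQ express the same rule-and-KPI pattern declaratively; a sketch like this is useful for prototyping rules before committing them to a tool.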
Posted 21 hours ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Security Architect
Project Role Description: Define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Document the implementation of the cloud security controls and transition to cloud security-managed operations.
Must have skills: Security Information and Event Management (SIEM)
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an L1 SOC Analyst you are the first line of defense in monitoring and triaging security alerts. You will work primarily with Splunk SIEM and SentinelOne EDR to identify potential security incidents, validate alerts, and escalate them according to the defined SOPs. You will ensure real-time visibility and log health while flagging suspicious activity promptly. This role is essential to ensuring timely detection and reducing noise from false positives.
Roles & Responsibilities:
- Basic Security Knowledge: Understanding of key concepts (malware, phishing, brute force, etc.)
- SIEM Familiarity: Exposure to the Splunk UI and understanding how to read/query logs
- Exposure to CrowdStrike Falcon Console: Ability to view and interpret endpoint alerts
- Alert Triage: Ability to differentiate between false positives and real threats
- Alert Triage & Investigation: Experience investigating escalated alerts using SIEM or EDR
- Hands-on experience with CrowdStrike EDR investigations
- Ticketing Systems: Familiarity with platforms like JIRA, ServiceNow, or similar
- Basic understanding of cybersecurity fundamentals
- Good analytical and triage skills
- Basic Scripting: Awareness of PowerShell or Python for log parsing (see the sketch below this posting)
- SOAR Exposure: Familiarity with automated triage workflows
- Security Certifications: Security+, Microsoft SC-900, or similar certification
- Operating System Basics: Windows and Linux process and file system awareness
- Monitor real-time alerts and dashboards in Splunk SIEM
- Perform initial triage on alerts and determine severity/priority
- Escalate validated security incidents to L2 analysts per defined SOPs
- Follow pre-defined SOAR playbooks to document or assist in response
- Ensure alert enrichment fields are populated (host info, user details, etc.)
- Conduct basic log searches to support alert analysis
- Perform daily health checks on log sources and ingestion pipelines
- Maintain accurate ticket documentation for each alert handled
- Participate in shift handovers and team sync-ups for awareness
Professional & Technical Skills:
- SIEM: Basic log searching, correlation rule awareness
- SOAR: Familiarity with playbook execution
- Security Concepts: Basic understanding of malware, phishing, brute force
- Tools: SentinelOne EDR, Splunk SIEM
Additional Information:
- The candidate should have a minimum of 2 years of experience in Security Information and Event Management (SIEM).
- This position is based at our Gurugram office.
- A 15 years full time education is required.
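A small sketch of the log parsing the Basic Scripting bullet refers to, using an invented auth-log format: it counts failed logins per source IP and flags likely brute force, the kind of signal an L1 analyst would validate and escalate per SOP.

```python
import re
from collections import Counter

# Hypothetical syslog-style line: "Failed password for admin from 10.0.0.5 port 22"
FAILED_RE = re.compile(r"Failed password for \S+ from (?P<ip>[\d.]+)")
THRESHOLD = 10  # illustrative cut-off for flagging a source IP

def flag_brute_force(path: str) -> list[str]:
    """Return source IPs whose failed-login count suggests brute force."""
    failures = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = FAILED_RE.search(line)
            if m:
                failures[m.group("ip")] += 1
    return [ip for ip, n in failures.items() if n >= THRESHOLD]

if __name__ == "__main__":
    for ip in flag_brute_force("auth.log"):
        print(f"possible brute force from {ip} - escalate per SOP")
```

In the SOC described above, a Splunk correlation search would raise this alert automatically; the script shows the underlying logic an analyst should be able to reason about.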
Posted 1 day ago
7.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must have skills: Python (Programming Language)
Good to have skills: Generative AI
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.
Roles & Responsibilities:
• Design, test, and optimise prompts for LLMs to support use cases which benefit the infra & application managed services.
• Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
• Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimisation, and schema transformation.
• Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
• Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
• Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
• Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.
Professional & Technical Skills:
• Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
• Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
• Familiarity with data modelling, SQL, and RDBMS concepts.
• Experience with agentic workflows, token optimization, and schema chunking.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- A 15 years full time education is required.
Posted 1 day ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must have skills: Python (Programming Language)
Good to have skills: Generative AI
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.
Roles & Responsibilities:
• Design, test, and optimise prompts for LLMs to support use cases which benefit the infra & application managed services.
• Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
• Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimisation, and schema transformation.
• Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
• Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
• Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
• Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.
Professional & Technical Skills:
• Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
• Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
• Familiarity with data modelling, SQL, and RDBMS concepts.
• Experience with agentic workflows, token optimization, and schema chunking.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- A 15 years full time education is required.
Posted 1 day ago
0 years
0 Lacs
India
On-site
About the Role:
We’re looking for an experienced and proactive WordPress developer to join our offshore development team. In this role, you’ll collaborate closely with client stakeholders and internal teams to build, enhance, and maintain enterprise-level WordPress solutions. If you have a strong technical background, a consultative mindset, and enjoy solving complex problems, we’d love to hear from you.
Key Responsibilities
- Actively participate in technical discussions with client teams and contribute ideas to enhance and scale WordPress platforms.
- Define clear and efficient implementation approaches for new features, custom components, and system integrations.
- Take ownership of the technical delivery process, ensuring code quality, performance, and maintainability.
- Proactively engage in addressing technical queries from internal teams and client stakeholders with clarity and collaboration.
- Set up and manage development, staging, and production environments, including deployment workflows.
- Drive a culture of clean code, reusability, and thoughtful architecture across the team.
Skills & Experience
- WordPress Development: Deep experience with custom themes, plugins, and Gutenberg blocks. Strong knowledge of WordPress core architecture and database structure.
- PHP & Laravel: Solid PHP skills with familiarity in Laravel (or similar frameworks) for complementary back-end needs.
- Scripting & Tools: Experience with WP-CLI, creating custom scripts, and automating WordPress tasks.
- API Integration: Proficient in parsing XML and integrating with external APIs using REST and SOAP protocols (see the sketch below this posting).
- Search Integration: Working knowledge of Elasticsearch and its integration with WordPress for advanced search capabilities.
- Infrastructure & Deployment: Comfortable with the LEMP stack (Linux, Nginx, MySQL, PHP) and experience setting up CI/CD pipelines for deployments.
- Communication & Leadership: Strong communication skills, with the ability to engage directly with client teams. A consultative and collaborative approach is essential.
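A minimal sketch of the XML-over-REST integration pattern named above. It is written in Python to keep one language across this page's examples; in the PHP/WordPress stack described here, the same flow would use wp_remote_get() or cURL plus SimpleXML. The feed URL is hypothetical.

```python
import requests
import xml.etree.ElementTree as ET

def fetch_titles(feed_url: str) -> list[str]:
    """Fetch an XML feed over HTTP and pull out the item titles.

    Errors surface early: HTTP failures raise via raise_for_status(),
    and malformed XML raises ET.ParseError from fromstring().
    """
    resp = requests.get(feed_url, timeout=10)
    resp.raise_for_status()                 # fail fast on 4xx/5xx
    root = ET.fromstring(resp.content)      # parse the XML payload
    return [el.text for el in root.iter("title") if el.text]

# Hypothetical endpoint; substitute a real feed to try it
print(fetch_titles("https://example.com/feed.xml"))
```

The same shape extends to SOAP by POSTing an envelope and parsing the response body with the same XML tooling.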
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Tech Data Engineer
Location: Hyderabad/Pune
Experience: 6 years
Role Description
This is a contract role for a Tech Data Engineer with 6 years of experience. The position is on-site and located in Hyderabad. The Tech Data Engineer will be responsible for managing data center operations, troubleshooting issues, cabling, and analyzing data. Daily tasks include ensuring data integrity, performing maintenance on data systems, and supporting the team with clear communication and problem-solving skills.
• Transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques
• Design, model, develop, and improve data pipelines and data products
• Engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using data platform infrastructure effectively
• Develop, train, and apply machine-learning models to make better predictions, automate manual processes, and solve challenging business problems
• Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements
• Build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
• Understand, represent, and advocate for client needs
6+ years of experience in:
• Comprehensive understanding and ability to apply data engineering techniques, from event streaming and real-time analytics to computational grids and graph processing engines
• Curiosity to learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind
• Strong command of at least one language among Python, Java, Golang
• Understanding of data management and database technologies, including SQL/NoSQL
• Understanding of data products, data structures, and data manipulation techniques, including classification, parsing, and pattern matching (see the small illustration below this posting)
• Experience with Databricks, ADLS, Delta Lake/Tables, and ETL tools would be an asset
• Good understanding of engineering practices and the software development lifecycle
• Enthusiastic, self-motivated, and client-focused
• Strong communicator, from making presentations to technical writing
• Bachelor’s degree in a relevant discipline or equivalent experience
Qualifications
Strong analytical skills and troubleshooting abilities
Experience in cabling and data center operations
Excellent communication skills
Ability to work effectively on-site in Hyderabad
Relevant certifications such as Cisco Certified Network Associate (CCNA) or similar are a plus
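A toy illustration of classification by pattern matching, one of the data manipulation techniques named in the skills above; the patterns and field values are invented.

```python
import re

# Hypothetical rules classifying raw feed fields by their shape
PATTERNS = [
    ("iso_date", re.compile(r"^\d{4}-\d{2}-\d{2}$")),
    ("amount",   re.compile(r"^-?\d+(\.\d{1,2})?$")),
    ("email",    re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")),
]

def classify(value: str) -> str:
    """Return the first matching class for a raw field, else 'unknown'."""
    for label, pattern in PATTERNS:
        if pattern.match(value):
            return label
    return "unknown"

for raw in ["2024-07-01", "1499.99", "ops@example.com", "n/a"]:
    print(raw, "->", classify(raw))
```

The same first-match-wins rule table scales naturally to schema inference over messy ingested files.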
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Title: R&D Data Engineer
About The Job
At Sanofi, we’re committed to providing the next-gen healthcare that patients and customers need. It’s about harnessing data insights and leveraging AI responsibly to search deeper and solve sooner than ever before. Join our R&D Data & AI Products and Platforms Team as an R&D Data Engineer and you can help make it happen.
What You Will Be Doing
Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing, and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives.
The R&D Data & AI Products and Platforms Team is a key team within R&D Digital, focused on developing and delivering Data and AI products for R&D use cases. This team plays a critical role in pursuing broader democratization of data across R&D and providing the foundation to scale AI/ML, advanced analytics, and operational analytics capabilities.
As an R&D Data Engineer, you will join this dynamic team committed to driving strategic and operational digital priorities and initiatives in R&D. You will work as part of a Data & AI Product Delivery Pod, led by a Product Owner, in an agile environment to deliver Data & AI Products. As a part of this team, you will be responsible for the design and development of data pipelines and workflows to ingest, curate, process, and store large volumes of complex structured and unstructured data. You will have the ability to work on multiple data products serving multiple areas of the business.
Our vision for digital, data analytics and AI
Join us on our journey in enabling Sanofi’s Digital Transformation through becoming an AI-first organization. This means:
- AI Factory - Versatile Teams Operating in Cross Functional Pods: Utilizing digital and data resources to develop AI products, bringing data management, AI and product development skills to products, programs and projects to create an agile, fulfilling and meaningful work environment.
- Leading Edge Tech Stack: Experience building products that will be deployed globally on a leading-edge tech stack.
- World Class Mentorship and Training: Working with renowned leaders and academics in machine learning to further develop your skillsets.
We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people’s lives. We’re also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?
Main Responsibilities
Data Product Engineering:
- Provide input into the engineering feasibility of developing specific R&D Data/AI Products
- Provide input to the Data/AI Product Owner and Scrum Master to support planning, capacity, and resource estimates
- Design, build, and maintain scalable and reusable ETL/ELT pipelines to ingest, transform, clean, and load data from sources into central platforms/repositories
- Structure and provision data to support modeling and data discovery, including filtering, tagging, joining, parsing and normalizing data (see the sketch at the end of this posting)
- Collaborate with the Data/AI Product Owner and Scrum Master to share progress on engineering activities and inform of any delays, issues, bugs, or risks with proposed remediation plans
- Design, develop, and deploy APIs, data feeds, or specific features required by product design and user stories
- Optimize data workflows to drive high performance and reliability of implemented data products
- Oversee and support junior engineers with Data/AI Product testing requirements and execution
Innovation & Team Collaboration
- Stay current on industry trends, emerging technologies, and best practices in data product engineering
- Contribute to a team culture of innovation, collaboration, and continuous learning within the product team
About You
Key Functional Requirements & Qualifications:
- Bachelor’s degree in software engineering or a related field, or equivalent work experience
- 3-5 years of experience in data product engineering, software engineering, or another related field
- Understanding of the R&D business and data environment preferred
- Excellent communication and collaboration skills
- Working knowledge of, and comfort working with, Agile methodologies
Key Technical Requirements & Qualifications
- Proficiency with data analytics and statistical software (incl. SQL, Python, Java, Excel, AWS, Snowflake, Informatica)
- Deep understanding and proven track record of developing data pipelines and workflows
Why Choose Us?
- Bring the miracles of science to life alongside a supportive, future-focused team
- Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs
Pursue Progress. Discover Extraordinary.
Progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let’s pursue progress. And let’s discover extraordinary together.
At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com!
Join Sanofi and step into a new era of science - where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never-been-done-before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve.
Ready to chase the miracles of science and improve people’s lives? Let’s Pursue Progress and Discover Extraordinary – together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
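A minimal sketch of the "filter, tag, join, parse, normalize" step referenced in the Main Responsibilities, using pandas. All table and column names are invented for illustration; nothing here describes Sanofi's actual pipelines, which would run such logic inside managed ETL/ELT tooling.

```python
import pandas as pd

def curate(assays: pd.DataFrame, compounds: pd.DataFrame) -> pd.DataFrame:
    """Toy ingest-and-curate step over hypothetical R&D assay data."""
    df = assays[assays["result"].notna()].copy()                   # filter incomplete rows
    df["compound_id"] = df["compound_id"].str.upper().str.strip()  # normalize join keys
    df = df.merge(compounds, on="compound_id", how="left")         # join reference data
    df["qc_flag"] = (df["result"] < 0).map({True: "review", False: "ok"})  # tag anomalies
    return df

assays = pd.DataFrame({"compound_id": [" cmp-1", "CMP-2"], "result": [0.8, -0.1]})
compounds = pd.DataFrame({"compound_id": ["CMP-1", "CMP-2"], "target": ["KRAS", "EGFR"]})
print(curate(assays, compounds))
```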
Posted 1 day ago
4.0 years
3 - 6 Lacs
Hyderabad
On-site
Join one of the nation’s leading and most impactful health care performance improvement companies. Over the years, Health Catalyst has achieved and documented clinical, operational, and financial improvements for many of the nation’s leading healthcare organizations. We are also increasingly serving international markets. Our mission is to be the catalyst for massive, measurable, data-informed healthcare improvement through:
- Data: integrate data in a flexible, open & scalable platform to power healthcare’s digital transformation
- Analytics: deliver analytic applications & services that generate insight on how to measurably improve
- Expertise: provide clinical, financial & operational experts who enable & accelerate improvement
- Engagement: attract, develop and retain world-class team members by being a best place to work
POSITION OVERVIEW:
We are looking for a highly skilled Senior Database Engineer with 4+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability.
KEY RESPONSIBILITIES:
- Query Optimization: Design, write, debug and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra.
- Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently.
- Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations.
- Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets.
- Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks.
- Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics (see the execution-plan sketch below this posting).
- Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks.
- Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability.
- Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads.
- Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines.
- Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements.
REQUIRED SKILLS AND QUALIFICATIONS:
- Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra.
- High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion.
- Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets.
- Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms.
- Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases.
- Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines.
- Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics.
- Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs.
- Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance.
- Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively.
PREFERRED QUALIFICATIONS:
- Certification in any of the mentioned database technologies.
- Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP.
- Knowledge of distributed systems and large-scale data processing.
- Familiarity with cloud-based database solutions and infrastructure.
- Familiarity with large-scale data ingestion tools like Kafka, Spark, or Flink.
EDUCATIONAL REQUIREMENTS:
Bachelor’s degree in computer science, Information Technology, or a related field. Equivalent work experience will also be considered.
The above statements describe the general nature and level of work being performed in this job function. They are not intended to be an exhaustive list of all duties, and indeed additional responsibilities may be assigned by Health Catalyst.
Studies show that candidates from underrepresented groups are less likely to apply for roles if they don’t have 100% of the qualifications shown in the job posting. While each of our roles has core requirements, please thoughtfully consider your skills and experience and decide if you are interested in the position. If you feel you may be a good fit for the role, even if you don’t meet all of the qualifications, we hope you will apply. If you feel you are lacking the core requirements for this position, we encourage you to continue exploring our careers page for other roles for which you may be a better fit.
At Health Catalyst, we appreciate the opportunity to benefit from the diverse backgrounds and experiences of others. Because of our deep commitment to respect every individual, Health Catalyst is an equal opportunity employer.
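A self-contained illustration of execution-plan debugging, as in the Performance Debugging bullet above. It uses SQLite from Python's standard library so it runs anywhere; the stores named in this posting (MySQL/PostgreSQL, etc.) have analogous EXPLAIN facilities, and the table here is invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INT, ts TEXT)")
con.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 500, f"2024-01-{i % 28 + 1:02d}") for i in range(10_000)],
)

def plan(sql: str) -> None:
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index
    for row in con.execute("EXPLAIN QUERY PLAN " + sql):
        print(row)

query = "SELECT * FROM events WHERE user_id = 42"
plan(query)  # before: a full table SCAN
con.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan(query)  # after: SEARCH ... USING INDEX idx_events_user
```

The same before/after discipline — read the plan, add or adjust an index, re-read the plan — is the core loop of query debugging on any of the engines listed above.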
Posted 1 day ago
1.0 - 2.0 years
2 - 3 Lacs
Mohali
On-site
Job Description: Flutter Developer
Job Location: Mohali
Experience: 1-2 years
Mobile App Development
- Build responsive and scalable cross-platform mobile apps using Flutter (iOS & Android).
- Convert UI/UX designs into functional mobile app components.
- Use Flutter widgets effectively to craft clean and reusable code.
API Integration
- Consume RESTful APIs and WebSockets to connect with backend services.
- Handle data parsing (JSON) and error handling gracefully (see the sketch below this posting).
Performance Optimization
- Optimize application performance, responsiveness, and speed.
- Use tools like Flutter DevTools for debugging and profiling.
Testing & Debugging
- Write unit, widget, and integration tests.
- Debug and resolve technical issues.
App Store Deployment
- Prepare and publish apps to the Apple App Store and Google Play Store.
- Handle app versioning, code signing, and platform-specific build issues.
Cross-functional Responsibilities
- Knowledge of backend skills (Node.js, PHP) is a plus.
- Collaborate with designers, product managers, and QA engineers.
- Review code (pull requests), suggest improvements, and mentor junior devs if needed.
- Experience with Git and version control workflows.
- Knowledge of containerization (Docker) is a plus.
- Ability to troubleshoot both frontend and backend bugs.
For further queries call/WhatsApp on 7743059799
#FlutterDeveloper #iOS #Android #NodeJS #PHP #MobileAppDevelopment #APIIntegration
Job Type: Full-time
Pay: ₹20,000.00 - ₹30,000.00 per month
Schedule: Day shift
Application Question(s):
- How many years of experience do you have in a Flutter role?
- Do you have experience in mobile app development?
- Do you have experience in API integration?
Location: Mohali, Punjab (Required)
Work Location: In person
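A minimal sketch of graceful JSON parsing and error handling, as the API Integration section asks for. Python is used to keep one language across this page's examples; a Flutter app would express the same guard with dart:convert and try/on FormatException. The payload fields are invented.

```python
import json

def parse_user(payload: str) -> dict:
    """Parse an API response defensively: malformed JSON or missing
    keys degrade to safe defaults instead of crashing the UI layer."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return {"name": "unknown", "age": None, "error": "malformed response"}
    return {
        "name": data.get("name", "unknown"),  # default when the key is absent
        "age": data.get("age"),
        "error": None,
    }

print(parse_user('{"name": "Asha", "age": 29}'))
print(parse_user("not json at all"))
```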
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: SDET (Software Development Engineer in Test)
Notice Period Requirement: Immediate to 2 months (official)
Job Locations: Gurgaon/Delhi
Experience: 5 to 8 years
Skills: SDET, automation, Java programming, Selenium, Playwright, Cucumber, Rest Assured, API coding (all mandatory)
Job Type: Full-Time
Job Description
We are seeking an experienced and highly skilled SDET (Software Development Engineer in Test) to join our Quality Engineering team. The ideal candidate will have a strong background in API, mobile, or web test automation, with hands-on experience creating robust automation frameworks and scripts. This role demands a thorough understanding of quality engineering practices, microservices architecture, and software testing tools.
Key Responsibilities:
- Design and develop scalable and modular automation frameworks using industry best practices such as the Page Object Model.
- Automate testing for distributed, highly scalable systems.
- Create and execute test scripts for GUI-based, API, and mobile applications.
- Perform end-to-end testing for APIs, ensuring thorough validation of request and response schemas, status codes, and exception handling.
- Conduct API testing using tools like Rest Assured, SOAP UI, NodeJS, and Postman, and validate data with serialization techniques (e.g., POJO classes).
- Implement and maintain BDD/TDD frameworks using tools like Cucumber, TestNG, or JUnit.
- Write and optimize SQL queries for data validation and backend testing.
- Integrate test suites into test management systems and CI/CD pipelines using tools like Maven, Gradle, and Git.
- Mentor team members and quickly adapt to new technologies and tools.
- Select and implement appropriate test automation tools and strategies based on project needs.
- Apply design patterns, modularization, and user libraries for efficient framework creation.
- Collaborate with cross-functional teams to ensure the quality and scalability of microservices and APIs.
Must-Have Skills:
- Proficiency in designing and developing automation frameworks from scratch.
- Strong programming skills in Java, Groovy, or JavaScript with a solid understanding of OOP concepts.
- Hands-on experience with at least one GUI automation tool (desktop/mobile); experience with multiple tools is an advantage.
- In-depth knowledge of API testing and microservices architecture.
- Experience with BDD and TDD methodologies and associated tools.
- Familiarity with SOAP and REST principles.
- Expertise in parsing and validating complex JSON and XML responses.
- Ability to create and manage test pipelines in CI/CD environments.
Nice-to-Have Skills:
- Experience with multiple test automation tools for GUI or mobile platforms.
- Knowledge of advanced serialization techniques and custom test harness implementation.
- Exposure to various test management tools and automation strategies.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years in software quality engineering and test automation.
- Strong analytical and problem-solving skills with attention to detail.
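To make the API-validation responsibility concrete (response schema, status codes, exception handling), here is a minimal contract test. It is sketched in Python with requests for brevity; the posting's Rest Assured equivalent follows the same shape, and the endpoint and fields are hypothetical:

```python
import requests  # third-party: pip install requests

# Hypothetical base URL and response contract, for illustration only.
BASE_URL = "https://api.example.com"

def test_get_user_contract():
    """Validate status code, content type, and required response fields."""
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert resp.status_code == 200
    assert resp.headers.get("Content-Type", "").startswith("application/json")
    body = resp.json()
    # Schema check: required fields present and correctly typed.
    assert isinstance(body.get("id"), int)
    assert isinstance(body.get("email"), str)
```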
Posted 1 day ago
0 years
0 Lacs
India
Remote
Title: SnapLogic Developer
Experience: 6+ years
Timings: 8:30 PM to 5:30 AM (EST)
Location: Remote
Salary: Up to 1 Lakh/month (depending on experience)
*This is a freelance role, not a permanent position.
Role: We are seeking a Senior SnapLogic Developer to lead the design, development, and maintenance of complex data integration pipelines using SnapLogic. This role will play a key part in managing all incoming and outgoing data flows across the enterprise, with a strong emphasis on EDI (X12) parsing, Salesforce integrations, and SnapLogic best practices. The ideal candidate is a technical expert who can also mentor junior developers and contribute to the evolution of our integration standards and architecture.
Key Responsibilities:
Lead and own SnapLogic pipeline development for various enterprise integration needs.
Design, build, and maintain scalable integration workflows involving EDI X12 formats, Salesforce Snaps, REST/SOAP APIs, and file-based transfers (SFTP, CSV, etc.).
Parse and transform EDI documents, particularly X12 837, 835, 834, and 270/271, into target system formats such as Salesforce, databases, or flat files.
Manage and monitor SnapLogic dataflows for production and non-production environments.
Collaborate with business and technical teams to understand integration requirements and deliver reliable solutions.
Lead a team of SnapLogic developers, providing technical guidance, mentorship, and code reviews.
Document integration flows, error handling mechanisms, retry logic, and operational procedures.
Establish and enforce SnapLogic development standards and reusable components (SnapPacks, pipelines, assets).
Collaborate with DevOps/SecOps to ensure deployments are automated and compliant.
Troubleshoot issues in existing integrations and optimize performance where needed.
Required Skills and Experience:
Proven expertise in parsing and transforming EDI X12 transactions (especially 837, 835, 834, and 270/271).
Strong experience using Salesforce Snaps, including data sync between Salesforce and external systems.
Deep understanding of SnapLogic architecture, pipeline execution patterns, error handling, and best practices.
Experience working with REST APIs, SOAP services, OAuth, JWT, and token management in integrations.
Knowledge of JSON, XML, XSLT, and data transformation logic.
Strong leadership and communication skills; ability to mentor junior developers and lead a small team.
Comfortable working in Agile environments with tools like Jira, Confluence, Git, etc.
Experience with data privacy and security standards (HIPAA, PHI) is a plus, especially in healthcare integrations.
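For context on the X12 parsing at the heart of this role: an X12 interchange is a flat string of segments whose delimiters are declared positionally in the ISA header. A minimal tokenizer sketch, assuming the common "~" segment and "*" element separators; real 837/835 processing also needs loop and hierarchy handling, which dedicated EDI tooling normally provides:

```python
# Truncated sample interchange; the delimiters here are assumed, since the
# actual separators are declared in the ISA header and can differ per sender.
SAMPLE = "GS*HC*SENDER*RECEIVER*20240501*1200*1*X*005010X222A1~ST*837*0001~SE*2*0001~"

def parse_segments(raw, segment_sep="~", element_sep="*"):
    """Split a raw X12 string into (segment_id, [elements]) tuples."""
    for seg in filter(None, (s.strip() for s in raw.split(segment_sep))):
        elements = seg.split(element_sep)
        yield elements[0], elements[1:]

if __name__ == "__main__":
    for seg_id, elements in parse_segments(SAMPLE):
        print(seg_id, elements[:4])
```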
Posted 1 day ago
0 years
0 Lacs
Kozhikode, Kerala, India
On-site
Pfactorial Technologies is a fast-growing AI/ML/NLP company at the forefront of innovation in Generative AI, voice technology, and intelligent automation. We specialize in building next-gen solutions using LLMs, agent frameworks, and custom ML pipelines. Join our dynamic team to work on real-world challenges and shape the future of AI-driven systems and smart automation.
We are looking for an AI/ML Engineer – LLMs, Voice Agents & Workflow Automation (0–3 years' experience):
Experience with LLM integration pipelines (OpenAI, Vertex AI, Hugging Face models)
Hands-on experience working with voice agents, TTS, STT, caching mechanisms, and ElevenLabs voice technology
Strong understanding of vector databases like Qdrant or Milvus
Hands-on experience with LangChain, LlamaIndex, or agent frameworks (e.g., AutoGen, CrewAI)
Knowledge of FastAPI, Celery, and orchestration of ML/AI services
Familiarity with cloud deployment on GCP, AWS, or Azure
Ability to build and fine-tune matching, ranking, or retrieval-based models
Developing agentic workflows for automation
Implementing NLP pipelines for parsing, summarizing, and communication (e.g., email bots, script generators)
Comfortable working with graph-based data representation and integrating with the frontend
Experience in multi-agent collaboration frameworks like Google Agent2Agent
Practical experience in data scraping and enrichment for ML training datasets
Understanding of compliance in AI applications
👉 For more updates, follow us on our LinkedIn page: https://in.linkedin.com/company/pfactorial
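As one concrete example of the "NLP pipelines for parsing, summarizing, and communication" requirement, a single summarization step might look like the sketch below. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name is an arbitrary choice for illustration, not a requirement of the role:

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the env

client = OpenAI()

def summarize_email(body: str) -> str:
    """One step of a hypothetical email-bot pipeline: condense an inbound email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption for this sketch
        messages=[
            {"role": "system", "content": "Summarize the email in two sentences."},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content
```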
Posted 1 day ago
5.0 years
0 Lacs
New Delhi, Delhi, India
Remote
Location: Remote (India-based preferred)
Type: Full-time | Founding Team | High Equity
Company: Flickd (www.flickd.in)
About the Role
We’re building India’s most advanced virtual try-on engine — think Doji meets TryOnDiffusion, but optimized for real-world speed, fashion, and body diversity. As our ML Engineer (Computer Vision + Try-On), you’ll own the end-to-end pipeline: from preprocessing user/product images to generating hyper-realistic try-on results with preserved pose, skin, texture, and identity. You’ll have full autonomy to build, experiment, and ship — working directly with React, Spring Boot, DevOps, and design folks already in place. This is not a junior researcher role. This is one person building the brain of the system - and setting the foundation for India's biggest visual shopping innovation.
What You’ll Build
Stage 1: User Image Preprocessing
Human parsing (face, body, hair), pose detection, face/limb alignment
Auto orientation, canvas resizing, brightness/contrast normalization
Stage 2: Product Image Processing
Background removal, garment segmentation (SAM/U^2-Net/YOLOv8)
Handle occlusions, transparent clothes, long sleeves, etc.
Stage 3: Try-On Engine
Implement and iterate on CP-VTON / TryOnDiffusion / FlowNet
Fine-tune on custom data for realism, garment drape, identity retention
Inference Optimisation
TorchScript / ONNX, batching, inference latency minimization
Collaborate with DevOps for Lambda/EC2 + GPU deployment
Postprocessing
Alpha blending, edge smoothing, fake shadows, cloth-body warps
You’re a Fit If You:
Have 2–5 years in ML/CV with real shipped work (not just notebooks)
Have worked on: human parsing, pose estimation, cloth warping, GANs
Are hands-on with PyTorch, OpenCV, Segmentation Models, Flow or ViT
Can replicate models from arXiv fast, and care about output quality
Want to own a system seen by millions, not just improve metrics
Stack You’ll Use
PyTorch, ONNX, TorchScript, Hugging Face
DensePose, OpenPose, Segment Anything, Diffusion Models
Docker, Redis, AWS Lambda, S3 (infra is already set up)
MLflow or DVC (can be implemented from scratch)
For exceptional talent, we’re flexible on cash vs equity split.
Why This Is a Rare Opportunity
Build the core AI product that powers a breakout consumer app
Work in a zero-BS, full-speed team (React, Spring Boot, DevOps, design all in place)
Be the founding ML brain and shape all future hires
Ship in weeks, not quarters — and see your output in front of users instantly
Apply now, or DM Dheekshith (Founder) on LinkedIn with your GitHub or project links. Let’s build something India’s never seen before.
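To illustrate the Stage 1 preprocessing described above, here is a minimal sketch of canvas resizing plus brightness normalization in OpenCV. It is an assumption-laden toy (the canvas size is arbitrary, and human parsing and pose detection are separate models left out here), not Flickd's actual pipeline:

```python
import cv2  # pip install opencv-python

def preprocess_user_image(path: str, canvas_size=(768, 1024)):
    """Resize a user photo to a fixed canvas and equalize its luminance.
    canvas_size is (width, height); the values are illustrative."""
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.resize(img, canvas_size, interpolation=cv2.INTER_AREA)
    # Equalize only the luma channel so colors are not distorted.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```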
Posted 1 day ago
3.0 years
0 Lacs
India
Remote
Job Title: AI Engineer – Web Crawling & Field Data Extraction
Location: Remote
Department: Engineering / Data Science
Experience Level: Mid to Senior
Employment Type: Contract to Hire
About the Role:
We are looking for a skilled AI Engineer with strong experience in web crawling, data parsing, and AI/ML-driven information extraction to join our team. You will be responsible for developing systems that automatically crawl websites, extract structured and unstructured data, and intelligently map the extracted content to predefined fields for business use. This role combines practical web scraping, NLP techniques, and AI model integration to automate workflows that involve large-scale content ingestion.
Key Responsibilities:
Design and develop automated web crawlers and scrapers to extract information from various websites and online resources.
Implement robust and scalable data extraction pipelines that convert semi-structured/unstructured data into structured field-level data.
Use Natural Language Processing (NLP) and ML models to intelligently interpret and map extracted content to specific form fields or schemas.
Build systems that can handle dynamic web content, CAPTCHAs, JavaScript-rendered pages, and anti-bot mechanisms.
Collaborate with frontend/backend teams to integrate extracted data into user-facing applications.
Monitor crawler performance, ensure compliance with legal/data policies, and manage scheduling, deduplication, and logging.
Optimize crawling strategies using AI/heuristics for prioritization, entity recognition, and data validation.
Create tools for auto-filling forms or generating structured records from crawled data.
Required Skills and Qualifications:
Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
3+ years of hands-on experience with web scraping frameworks (e.g., Scrapy, Puppeteer, Playwright, Selenium).
Proficiency in Python, with experience in BeautifulSoup, lxml, requests, aiohttp, or similar libraries.
Experience with NLP libraries (e.g., spaCy, NLTK, Hugging Face Transformers) to parse and map extracted data.
Familiarity with ML-based data classification, extraction, and field mapping.
Knowledge of structured data formats (JSON, XML, CSV) and RESTful APIs.
Experience handling anti-scraping techniques and rate-limiting controls.
Strong problem-solving skills, clean coding practices, and the ability to work independently.
Nice-to-Have:
Experience with AI form understanding (e.g., LayoutLM, DocAI, OCR).
Familiarity with Large Language Models (LLMs) for intelligent data labeling or validation.
Exposure to data pipelines, ETL frameworks, or orchestration tools (Airflow, Prefect).
Understanding of data privacy, compliance, and ethical crawling standards.
Why Join Us?
Work on cutting-edge AI applications in real-world automation.
Be part of a fast-growing and collaborative team.
Opportunity to lead and shape intelligent data ingestion solutions from the ground up.
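As a small illustration of "mapping extracted content to predefined fields": fetch a page, select nodes, and emit a structured record. The URL, selectors, and field names below are hypothetical; production crawlers also need robots.txt checks, politeness delays, and per-site configuration:

```python
import requests                 # pip install requests beautifulsoup4
from bs4 import BeautifulSoup

# Hypothetical field-to-selector mapping; real systems load this per site.
FIELD_SELECTORS = {"title": "h1.product-title", "price": "span.price"}

def extract_fields(url: str) -> dict:
    """Fetch a page and map selected elements onto predefined record fields."""
    resp = requests.get(url, timeout=10, headers={"User-Agent": "field-bot/0.1"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    record = {}
    for field, selector in FIELD_SELECTORS.items():
        node = soup.select_one(selector)
        record[field] = node.get_text(strip=True) if node else None
    return record
```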
Posted 1 day ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Life at MX
We are driven by our moral imperative to advance mankind - and it all starts with our people, product and purpose. We always carry a deep sense of drive and passion with us. If you thrive in a challenging work environment, surrounded by incredible team members who will help you grow, MX is the right place for you. Come build with us and be part of an award-winning company that’s helping create meaningful and lasting change in the financial industry.
We’re looking for a highly skilled engineer who thrives at the intersection of automation, AI, and web data extraction. You will be responsible for building advanced web scraping systems, designing evasion strategies to bypass anti-bot mechanisms, and integrating intelligent data extraction techniques. This role requires strong expertise in TypeScript, Puppeteer (or Playwright), and modern scraping architectures, along with a practical understanding of bot detection mechanisms and machine learning for smarter data acquisition.
Key Responsibilities
Design and maintain scalable web scraping pipelines using Puppeteer, Playwright, or headless browsers
Implement evasion techniques to bypass bot detection systems (e.g., fingerprint spoofing, dynamic delays, proxy rotation)
Leverage AI/ML models for intelligent parsing, CAPTCHA solving, and anomaly detection
Handle large-scale data collection with distributed scraping infrastructure
Monitor scraping performance, detect bans, and auto-recover from failure states
Build structured outputs (e.g., JSON, GraphQL feeds) from semi-structured/unstructured sources
Collaborate with product and data science teams to shape high-quality, reliable data inputs
Ensure compliance with legal and ethical scraping practices
Required Skills & Experience
4+ years of experience building and scaling web scraping tools
Strong proficiency in TypeScript and Node.js
Hands-on with Puppeteer, Playwright, or Selenium for browser automation
Deep understanding of how bot detection systems work (e.g., Cloudflare, Akamai, hCaptcha)
Experience with proxy management, user-agent spoofing, and fingerprint manipulation
Familiarity with CAPTCHA-solving libraries/APIs, ML-based screen parsing, and OCR
Working knowledge of AI/ML for parsing or automation (e.g., Tesseract, TensorFlow, OpenAI APIs)
Comfortable working with large-scale data pipelines, queues (e.g., Kafka, RabbitMQ), and headless fleet management
Additional Skills
Experience with cloud infrastructure (AWS/GCP) for scalable scraping jobs
CI/CD and containerization (Docker, Kubernetes) for deployment
Knowledge of ethical and legal considerations around data scraping
Contributions to open-source scraping frameworks or tools
Work Environment
In this role, a significant aspect of the job involves working in the office for a standard 40-hour workweek. We believe that the collaborative nature of our work and the face-to-face interactions among team members are essential for fostering a dynamic and productive work environment. Being present in the office enables seamless communication, facilitates quick decision-making, and encourages spontaneous collaboration that contributes to the overall success of our projects. We value the synergy that comes from having our team members physically together, allowing for immediate problem-solving, idea exchange, and team building.
Compensation
The expected earnings for this role could be comprised of a base salary and other forms of cash compensation, such as bonus or commissions as applicable.
This pay range is just one component of MX’s total rewards package. MX takes a number of factors into account when determining individual starting pay, including the job and level a candidate is hired into, location, skill set, and peer compensation. Please note that applicants for this position must have the legal right to work in India without the need for sponsorship. We are unable to provide work sponsorship for this role, and candidates should be able to verify their eligibility to work in the country independently. Proof of eligibility to work in India will be required as part of the hiring process.
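For a sense of the headless-browser work this role centers on: the posting targets TypeScript, but the same flow in Python's Playwright binding is shown below as a minimal sketch. The user agent and viewport are placeholder values; proxy rotation, fingerprint handling, and legal review are site-specific concerns outside this sketch:

```python
from playwright.sync_api import sync_playwright  # pip install playwright

# Placeholder user agent; real rotation policies draw from a managed pool.
UA = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/125.0 Safari/537.36"

def fetch_rendered_html(url: str) -> str:
    """Load a JavaScript-rendered page in headless Chromium and return its HTML."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(
            user_agent=UA, viewport={"width": 1366, "height": 768}
        )
        page = context.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
    return html
```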
Posted 1 day ago
0 years
2 - 3 Lacs
Ahmedabad, Gujarat, India
On-site
Company Profile
Nextgen is a UK-based company that provides services for mobile operators worldwide. We are a growing company with 300+ employees and offices in Europe, Asia, India, Cairo, and the US. Our core competency is the provision of services around the commercial aspects of mobile roaming, data, and financial clearing. Our services are based on proprietary software and operated centrally. The software is based on web and Oracle technology, and its main purpose is the processing and distribution of roaming data, settlement of charges between operators, and provision of business intelligence applications to our customers.
Role Purpose & Context
Accounts Assistants in the Receivable Management Team are required to ensure that all GSM and SMS invoices are generated within the invoice-generation deadline set by the operations calendar. Team members will allocate all bank receipts within 24 hours of receipt loading.
Responsibilities
Invoice Generation & Dispatch
Sanity-check GSM & SMS data received from the DCH/client before invoice generation.
Load data and generate GSM & SMS invoices within the deadline.
Check error logs and update the "All Clients" sheet (Missing Roaming Agreement sheet) accordingly.
Send generated invoices for client confirmation through an Issue ID for the respective client.
Create Hub parent positions.
Check payable and receivable RAPs once data is loaded and invoices are generated.
Cross-check MFS/SMS data against the generated invoices before dispatch.
Manually check for duplicate TAP file billing.
Update the "Data Parsing & Invoice Generation" sheet promptly during invoice generation.
Create MRAs once received from the client.
Regenerate invoices once RAPs are approved by the Account Manager.
Notify the Account Manager to generate a credit note/debit note if an invoice is generated with a negative value.
Share formatted data to the shared path for future reference.
Cash Allocation
Allocate receipts, or take the relevant action, on a daily basis within 24 hours.
Clear the remittance queue on OTRS and share items to the relevant folders.
Chase missing PNs every alternate day; if a PN is not received after three system chasers and one personalized email to the partner, log an issue to the relevant Account Manager.
Take confirmation from the Account Manager, via the Issue Log, in case of FX loss/gain or other PN-related queries for which the back office is not authorized to act.
Chase missing invoices required for payment allocation via the APEX or Operations mailbox.
Provide and reply to missing-invoice request emails.
Send requested payment notifications to the partner/FCH.
Chase and follow up on missing invoices for our customers.
Requirements
Bachelor's degree in Business, Accounts, or a related field preferred
Strong communication and relationship-building skills
Experience in invoice reconciliation
Ability to work in a fast-paced, dynamic environment with a focus on results
Excellent analytical skills and attention to detail
Proficiency in Microsoft Office and CRM software
Strong organizational skills
Ability to harness financial data to inform decisions
Benefits
Health insurance
Provident Fund, gratuity
5-day workweek (Monday–Friday)
Quarterly employee engagement activities
Posted 1 day ago
The parsing job market in India is thriving, with a growing demand for professionals skilled in parsing techniques across various industries. Employers are actively seeking individuals who can effectively extract and analyze structured data from different sources. If you are a job seeker looking to explore parsing roles in India, this article will provide you with valuable insights and guidance.
Major Indian tech hubs are known for their vibrant tech industries and offer numerous opportunities for individuals interested in parsing roles.
The average salary range for parsing professionals in India varies based on experience levels. Entry-level positions can expect to earn between INR 3-6 lakhs per annum, while experienced professionals can command salaries ranging from INR 8-15 lakhs per annum.
In the field of parsing, a typical career path may include the following progression:
- Junior Developer
- Software Engineer
- Senior Developer
- Tech Lead
- Architect
As professionals gain experience and expertise in parsing techniques, they can advance to higher roles with increased responsibilities.
In addition to parsing skills, individuals pursuing roles in this field are often expected to possess or develop the following skills:
- Data analysis
- Programming languages (e.g., Python, Java)
- Knowledge of databases
- Problem-solving abilities
As you prepare for parsing roles in India, remember to showcase your expertise in parsing techniques and related skills during interviews. Stay updated with the latest trends in the field and practice answering common interview questions to boost your confidence. With dedication and perseverance, you can secure a rewarding career in parsing in India. Good luck!