
890 Parsing Jobs - Page 2

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.5 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must-have skills: Python (Programming Language)
Good-to-have skills: Generative AI
Minimum experience required: 7.5 years
Educational qualification: 15 years of full-time education

Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.

Roles & Responsibilities:
• Design, test, and optimize prompts for LLMs to support use cases that benefit infrastructure & application managed services.
• Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
• Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimization, and schema transformation.
• Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
• Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
• Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
• Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.

Professional & Technical Skills:
• Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
• Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
• Familiarity with data modelling, SQL, and RDBMS concepts.
• Experience with agentic workflows, token optimization, and schema chunking.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- 15 years of full-time education is required.
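The chunking and token-optimization work this role describes follows a common pattern. A minimal sketch, assuming the openai and tiktoken packages; the model name, chunk size, and summarization task are illustrative, not taken from the posting:

```python
# Token-aware chunking plus a templated LLM call. Model name and chunk size
# are illustrative assumptions; a real pipeline would tune both per use case.
from openai import OpenAI
import tiktoken

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
enc = tiktoken.get_encoding("cl100k_base")

def chunk_by_tokens(text: str, max_tokens: int = 800) -> list[str]:
    """Split text into chunks of at most max_tokens tokens."""
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens]) for i in range(0, len(tokens), max_tokens)]

def summarize_ticket(ticket_text: str) -> str:
    """Summarize a support ticket chunk by chunk, then join the partial summaries."""
    partials = []
    for chunk in chunk_by_tokens(ticket_text):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": "You are a support triage assistant."},
                {"role": "user", "content": f"Summarize this ticket excerpt:\n{chunk}"},
            ],
        )
        partials.append(resp.choices[0].message.content)
    return "\n".join(partials)
```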

Posted 2 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must-have skills: Python (Programming Language)
Good-to-have skills: Generative AI
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments.

Roles & Responsibilities:
• Design, test, and optimize prompts for LLMs to support use cases that benefit infrastructure & application managed services.
• Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration.
• Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimization, and schema transformation.
• Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms.
• Ensure all AI solutions comply with internal data privacy, PII masking, and security standards.
• Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback.
• Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.

Professional & Technical Skills:
• Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting.
• Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines).
• Familiarity with data modelling, SQL, and RDBMS concepts.
• Experience with agentic workflows, token optimization, and schema chunking.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Noida office.
- 15 years of full-time education is required.

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

About the Role: We’re looking for an experienced and proactive WordPress developer to join our offshore development team. In this role, you’ll collaborate closely with client stakeholders and internal teams to build, enhance, and maintain enterprise-level WordPress solutions. If you have a strong technical background, a consultative mindset, and enjoy solving complex problems, we’d love to hear from you.

Key Responsibilities:
Actively participate in technical discussions with client teams and contribute ideas to enhance and scale WordPress platforms.
Define clear and efficient implementation approaches for new features, custom components, and system integrations.
Take ownership of the technical delivery process, ensuring code quality, performance, and maintainability.
Proactively address technical queries from internal teams and client stakeholders with clarity and collaboration.
Set up and manage development, staging, and production environments, including deployment workflows.
Drive a culture of clean code, reusability, and thoughtful architecture across the team.

Skills & Experience:
WordPress Development: Deep experience with custom themes, plugins, and Gutenberg blocks. Strong knowledge of WordPress core architecture and database structure.
PHP & Laravel: Solid PHP skills with familiarity in Laravel (or similar frameworks) for complementary back-end needs.
Scripting & Tools: Experience with WP-CLI, creating custom scripts, and automating WordPress tasks.
API Integration: Proficient in parsing XML and integrating with external APIs using REST and SOAP protocols.
Search Integration: Working knowledge of Elasticsearch and its integration with WordPress for advanced search capabilities.
Infrastructure & Deployment: Comfortable with the LEMP stack (Linux, Nginx, MySQL, PHP) and experienced in setting up CI/CD pipelines for deployments.
Communication & Leadership: Strong communication skills, with the ability to engage directly with client teams. A consultative and collaborative approach is essential.

Posted 2 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Tech Data Engineer
Location: Hyderabad/Pune
Experience: 6 years

Role Description: This is a contract role for a Tech Data Engineer with 6 years of experience. The position is on-site and located in Hyderabad. The Tech Data Engineer will be responsible for managing data center operations, troubleshooting issues, cabling, and analyzing data. Daily tasks include ensuring data integrity, performing maintenance on data systems, and supporting the team with clear communication and problem-solving skills.

Responsibilities:
• Transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques.
• Design, model, develop, and improve data pipelines and data products.
• Engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using data platform infrastructure effectively.
• Develop, train, and apply machine-learning models to make better predictions, automate manual processes, and solve challenging business problems.
• Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
• Build observability into our solutions, monitor production health, help resolve incidents, and remediate the root cause of risks and issues.
• Understand, represent, and advocate for client needs.

6+ years of experience in:
• A comprehensive understanding of, and the ability to apply, data engineering techniques, from event streaming and real-time analytics to computational grids and graph processing engines.
• Curiosity to learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind.
• Strong command of at least one language among Python, Java, and Golang.
• Understanding of data management and database technologies, including SQL/NoSQL.
• Understanding of data products, data structures, and data manipulation techniques, including classification, parsing, and pattern matching.
• Experience with Databricks, ADLS, Delta Lake/Tables, and ETL tools would be an asset.
• Good understanding of engineering practices and the software development lifecycle.
• Enthusiastic, self-motivated, and client-focused.
• Strong communicator, from making presentations to technical writing.
• Bachelor’s degree in a relevant discipline or equivalent experience.

Qualifications:
Strong analytical skills and troubleshooting abilities
Experience in cabling and data center operations
Excellent communication skills
Ability to work effectively on-site in Hyderabad
Relevant certifications such as Cisco Certified Network Associate (CCNA) or similar are a plus
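The parsing, pattern matching, and classification skills listed above are easy to make concrete. A minimal sketch in Python; the log format and categories are illustrative assumptions, not from the posting:

```python
# Parse semi-structured records with a regex, then classify them by rule.
import re
from collections import Counter

LINE_RE = re.compile(r"(?P<ts>\S+) (?P<level>INFO|WARN|ERROR) (?P<msg>.*)")

def classify(msg: str) -> str:
    """Rule-based classification of a parsed message (illustrative rules)."""
    if "timeout" in msg.lower():
        return "latency"
    if re.search(r"disk|storage", msg, re.IGNORECASE):
        return "storage"
    return "other"

records = [
    "2024-05-01T10:00:00Z ERROR connection timeout on node-3",
    "2024-05-01T10:00:05Z WARN disk usage at 91%",
]
counts = Counter()
for line in records:
    m = LINE_RE.match(line)
    if m:
        counts[classify(m["msg"])] += 1
print(counts)  # Counter({'latency': 1, 'storage': 1})
```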

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Title: R&D Data Engineer

About the Job: At Sanofi, we’re committed to providing the next-gen healthcare that patients and customers need. It’s about harnessing data insights and leveraging AI responsibly to search deeper and solve sooner than ever before. Join our R&D Data & AI Products and Platforms Team as an R&D Data Engineer and you can help make it happen.

What You Will Be Doing: Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing, and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives.

The R&D Data & AI Products and Platforms Team is a key team within R&D Digital, focused on developing and delivering Data and AI products for R&D use cases. This team plays a critical role in pursuing broader democratization of data across R&D and providing the foundation to scale AI/ML, advanced analytics, and operational analytics capabilities.

As an R&D Data Engineer, you will join this dynamic team committed to driving strategic and operational digital priorities and initiatives in R&D. You will work as part of a Data & AI Product Delivery Pod, led by a Product Owner, in an agile environment to deliver Data & AI Products. As part of this team, you will be responsible for the design and development of data pipelines and workflows to ingest, curate, process, and store large volumes of complex structured and unstructured data. You will have the ability to work on multiple data products serving multiple areas of the business.

Our vision for digital, data analytics and AI: Join us on our journey in enabling Sanofi’s digital transformation through becoming an AI-first organization. This means:
AI Factory - Versatile Teams Operating in Cross-Functional Pods: Utilizing digital and data resources to develop AI products, bringing data management, AI, and product development skills to products, programs, and projects to create an agile, fulfilling, and meaningful work environment.
Leading-Edge Tech Stack: Experience building products that will be deployed globally on a leading-edge tech stack.
World-Class Mentorship and Training: Working with renowned leaders and academics in machine learning to further develop your skillset.

We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people’s lives. We’re also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?

Main Responsibilities:
Data Product Engineering:
Provide input into the engineering feasibility of developing specific R&D Data/AI Products.
Provide input to the Data/AI Product Owner and Scrum Master to support planning, capacity, and resource estimates.
Design, build, and maintain scalable and reusable ETL/ELT pipelines to ingest, transform, clean, and load data from sources into central platforms/repositories.
Structure and provision data to support modeling and data discovery, including filtering, tagging, joining, parsing, and normalizing data.
Collaborate with the Data/AI Product Owner and Scrum Master to share progress on engineering activities and flag any delays, issues, bugs, or risks, with proposed remediation plans.
Design, develop, and deploy APIs, data feeds, or specific features required by product design and user stories.
Optimize data workflows to drive high performance and reliability of implemented data products.
Oversee and support junior engineers with Data/AI Product testing requirements and execution.
Innovation & Team Collaboration:
Stay current on industry trends, emerging technologies, and best practices in data product engineering.
Contribute to a team culture of innovation, collaboration, and continuous learning within the product team.

About You
Key Functional Requirements & Qualifications:
Bachelor’s degree in software engineering or a related field, or equivalent work experience.
3-5 years of experience in data product engineering, software engineering, or another related field.
Understanding of the R&D business and data environment preferred.
Excellent communication and collaboration skills.
Working knowledge of, and comfort working with, Agile methodologies.
Key Technical Requirements & Qualifications:
Proficiency with data analytics and statistical software (incl. SQL, Python, Java, Excel, AWS, Snowflake, Informatica).
Deep understanding and proven track record of developing data pipelines and workflows.

Why Choose Us?
Bring the miracles of science to life alongside a supportive, future-focused team.
Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally.
Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs.

Pursue Progress. Discover Extraordinary. Progress doesn’t happen without people - people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas, and exploring all the opportunities we have to offer. Let’s pursue progress. And let’s discover extraordinary together.

At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

Join Sanofi and step into a new era of science, where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never been done before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people’s lives? Let’s pursue progress and discover extraordinary, together.

At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status, or other characteristics protected by law.

Posted 2 days ago

Apply

4.0 years

3 - 6 Lacs

Hyderābād

On-site

Join one of the nation’s leading and most impactful health care performance improvement companies. Over the years, Health Catalyst has achieved and documented clinical, operational, and financial improvements for many of the nation’s leading healthcare organizations. We are also increasingly serving international markets. Our mission is to be the catalyst for massive, measurable, data-informed healthcare improvement through:
Data: integrate data in a flexible, open & scalable platform to power healthcare’s digital transformation
Analytics: deliver analytic applications & services that generate insight on how to measurably improve
Expertise: provide clinical, financial & operational experts who enable & accelerate improvement
Engagement: attract, develop and retain world-class team members by being a best place to work

POSITION OVERVIEW:
We are looking for a highly skilled Senior Database Engineer with 4+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable enough to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability.

KEY RESPONSIBILITIES:
Query Optimization: Design, write, debug, and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra.
Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently.
Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations.
Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets.
Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks.
Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics.
Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks.
Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability.
Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads.
Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines.
Stay Updated: Keep up to date with the latest advancements in database technologies and recommend improvements.

REQUIRED SKILLS AND QUALIFICATIONS:
Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra.
High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion.
Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets.
Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms.
Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases.
Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines.
Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics.
Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs.
Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance.
Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively.

PREFERRED QUALIFICATIONS:
Certification in any of the mentioned database technologies.
Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP.
Knowledge of distributed systems and large-scale data processing.
Familiarity with cloud-based database solutions and infrastructure.
Familiarity with large-scale data ingestion tools like Kafka, Spark, or Flink.

EDUCATIONAL REQUIREMENTS:
Bachelor’s degree in computer science, Information Technology, or a related field. Equivalent work experience will also be considered.

The above statements describe the general nature and level of work being performed in this job function. They are not intended to be an exhaustive list of all duties, and additional responsibilities may be assigned by Health Catalyst.

Studies show that candidates from underrepresented groups are less likely to apply for roles if they don’t have 100% of the qualifications shown in the job posting. While each of our roles has core requirements, please thoughtfully consider your skills and experience and decide if you are interested in the position. If you feel you may be a good fit for the role, even if you don’t meet all of the qualifications, we hope you will apply. If you feel you are lacking the core requirements for this position, we encourage you to continue exploring our careers page for other roles for which you may be a better fit.

At Health Catalyst, we appreciate the opportunity to benefit from the diverse backgrounds and experiences of others. Because of our deep commitment to respect every individual, Health Catalyst is an equal opportunity employer.
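The execution-plan debugging loop described above can be sketched briefly. A minimal Python example, assuming PostgreSQL and the psycopg2 package; the DSN, table, and index hint are placeholders:

```python
# Pull a query plan with EXPLAIN ANALYZE and flag sequential scans.
import psycopg2

conn = psycopg2.connect("dbname=app user=readonly host=localhost")  # placeholder DSN
query = "SELECT * FROM events WHERE tenant_id = %s AND created_at > now() - interval '1 day'"

with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE executes the query and returns the measured plan
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query, ("tenant-42",))
    plan = "\n".join(row[0] for row in cur.fetchall())

print(plan)
if "Seq Scan" in plan:
    print("Hint: consider an index on (tenant_id, created_at)")  # illustrative remediation
```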

Posted 2 days ago

Apply

1.0 - 2.0 years

2 - 3 Lacs

Mohali

On-site

Job Description: Flutter Developer
Job Location: Mohali
Experience: 1-2 years

Mobile App Development
Build responsive and scalable cross-platform mobile apps using Flutter (iOS & Android).
Convert UI/UX designs into functional mobile app components.
Use Flutter widgets effectively to craft clean and reusable code.
API Integration
Consume RESTful APIs and WebSockets to connect with backend services.
Handle data parsing (JSON) and error handling gracefully.
Performance Optimization
Optimize application performance, responsiveness, and speed.
Use tools like Flutter DevTools for debugging and profiling.
Testing & Debugging
Write unit, widget, and integration tests.
Debug and resolve technical issues.
App Store Deployment
Prepare and publish apps to the Apple App Store and Google Play Store.
Handle app versioning, code signing, and platform-specific build issues.
Cross-Functional Responsibilities
· Knowledge of backend skills (Node.js, PHP) is a plus.
· Collaborate with designers, product managers, and QA engineers.
· Review code (pull requests), suggest improvements, and mentor junior devs if needed.
· Experience with Git and version control workflows.
· Knowledge of containerization (Docker) is a plus.
· Ability to troubleshoot both frontend and backend bugs.

For further queries, call/WhatsApp 7743059799.
#FlutterDeveloper #iOS #Android #NodeJS #PHP #MobileAppDevelopment #APIIntegration

Job Type: Full-time
Pay: ₹20,000.00 - ₹30,000.00 per month
Schedule: Day shift
Application Question(s):
How many years of experience do you have in a Flutter role?
Do you have experience in mobile app development?
Do you have experience in API integration?
Location: Mohali, Punjab (Required)
Work Location: In person

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: SDET (Software Development Engineer in Test)
Notice Period Requirement: Immediate to 2 months (official)
Job Locations: Gurgaon/Delhi
Experience: 5 to 8 years
Skills: SDET, Automation, Java programming, Selenium, Playwright, Cucumber, Rest Assured, API coding (all mandatory)
Job Type: Full-time

Job Description:
We are seeking an experienced and highly skilled SDET (Software Development Engineer in Test) to join our Quality Engineering team. The ideal candidate will possess a strong background in test automation for APIs, mobile, or web, with hands-on experience in creating robust automation frameworks and scripts. This role demands a thorough understanding of quality engineering practices, microservices architecture, and software testing tools.

Key Responsibilities:
- Design and develop scalable and modular automation frameworks using industry best practices such as the Page Object Model.
- Automate testing for distributed, highly scalable systems.
- Create and execute test scripts for GUI-based, API, and mobile applications.
- Perform end-to-end testing for APIs, ensuring thorough validation of request and response schemas, status codes, and exception handling.
- Conduct API testing using tools like Rest Assured, SOAP UI, NodeJS, and Postman, and validate data with serialization techniques (e.g., POJO classes).
- Implement and maintain BDD/TDD frameworks using tools like Cucumber, TestNG, or JUnit.
- Write and optimize SQL queries for data validation and backend testing.
- Integrate test suites into test management systems and CI/CD pipelines using tools like Maven, Gradle, and Git.
- Mentor team members and quickly adapt to new technologies and tools.
- Select and implement appropriate test automation tools and strategies based on project needs.
- Apply design patterns, modularization, and user libraries for efficient framework creation.
- Collaborate with cross-functional teams to ensure the quality and scalability of microservices and APIs.

Must-Have Skills:
- Proficiency in designing and developing automation frameworks from scratch.
- Strong programming skills in Java, Groovy, or JavaScript with a solid understanding of OOP concepts.
- Hands-on experience with at least one GUI automation tool (desktop/mobile); experience with multiple tools is an advantage.
- In-depth knowledge of API testing and microservices architecture.
- Experience with BDD and TDD methodologies and associated tools.
- Familiarity with SOAP and REST principles.
- Expertise in parsing and validating complex JSON and XML responses.
- Ability to create and manage test pipelines in CI/CD environments.

Nice-to-Have Skills:
- Experience with multiple test automation tools for GUI or mobile platforms.
- Knowledge of advanced serialization techniques and custom test harness implementation.
- Exposure to various test management tools and automation strategies.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years in software quality engineering and test automation.
- Strong analytical and problem-solving skills with attention to detail.

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Title: SnapLogic Developer
Experience: 6+ years
Timings: 8:30 PM to 5:30 AM (EST timezone)
Location: Remote
Salary: Up to 1 Lakh/month (depending on experience)
Note: This is a freelancing role, not a permanent position.

Role:
We are seeking a Senior SnapLogic Developer to lead the design, development, and maintenance of complex data integration pipelines using SnapLogic. This role will play a key part in managing all incoming and outgoing data flows across the enterprise, with a strong emphasis on EDI (X12) parsing, Salesforce integrations, and SnapLogic best practices. The ideal candidate is a technical expert who can also mentor junior developers and contribute to the evolution of our integration standards and architecture.

Key Responsibilities:
Lead and own SnapLogic pipeline development for various enterprise integration needs.
Design, build, and maintain scalable integration workflows involving EDI X12 formats, Salesforce Snaps, REST/SOAP APIs, and file-based transfers (SFTP, CSV, etc.).
Parse and transform EDI documents, particularly X12 837, 835, 834, and 270/271, into target system formats such as Salesforce, databases, or flat files.
Manage and monitor SnapLogic dataflows for production and non-production environments.
Collaborate with business and technical teams to understand integration requirements and deliver reliable solutions.
Lead a team of SnapLogic developers, providing technical guidance, mentorship, and code reviews.
Document integration flows, error-handling mechanisms, retry logic, and operational procedures.
Establish and enforce SnapLogic development standards and reusable components (SnapPacks, pipelines, assets).
Collaborate with DevOps/SecOps to ensure deployments are automated and compliant.
Troubleshoot issues in existing integrations and optimize performance where needed.

Required Skills and Experience:
Proven expertise in parsing and transforming EDI X12 transactions (especially 837, 835, 834, 270/271).
Strong experience using Salesforce Snaps, including data sync between Salesforce and external systems.
Deep understanding of SnapLogic architecture, pipeline execution patterns, error handling, and best practices.
Experience working with REST APIs, SOAP services, OAuth, JWT, and token management in integrations.
Knowledge of JSON, XML, XSLT, and data transformation logic.
Strong leadership and communication skills; ability to mentor junior developers and lead a small team.
Comfortable working in Agile environments with tools like Jira, Confluence, and Git.
Experience with data privacy and security standards (HIPAA, PHI) is a plus, especially in healthcare integrations.
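SnapLogic pipelines are configured visually rather than coded, but the X12 structure they parse is easy to show. A minimal Python sketch that splits an interchange into segments and reads claim amounts from an 837; the sample data and element positions are illustrative, not a full implementation:

```python
# X12 uses '~' as the segment terminator and '*' as the element separator.
# In an 837, the CLM segment carries the claim ID (CLM01) and total charge (CLM02).
SAMPLE_837 = (
    "ISA*00*          *00*          *ZZ*SENDER*ZZ*RECEIVER*240501*1200*^*00501*000000001*0*T*:~"
    "ST*837*0001~"
    "CLM*PATIENT123*125.50***11:B:1~"
    "SE*3*0001~"
)

def parse_segments(x12: str) -> list[list[str]]:
    """Split an X12 interchange into segments (on '~') and elements (on '*')."""
    return [seg.split("*") for seg in x12.split("~") if seg]

for seg in parse_segments(SAMPLE_837):
    if seg[0] == "CLM":  # claim-level segment
        claim_id, amount = seg[1], seg[2]
        print(f"claim {claim_id}: total charge {amount}")
```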

Posted 2 days ago

Apply

0 years

0 Lacs

Kozhikode, Kerala, India

On-site

Pfactorial Technologies is a fast-growing AI/ML/NLP company at the forefront of innovation in generative AI, voice technology, and intelligent automation. We specialize in building next-gen solutions using LLMs, agent frameworks, and custom ML pipelines. Join our dynamic team to work on real-world challenges and shape the future of AI-driven systems and smart automation.

We are looking for an AI/ML Engineer – LLMs, Voice Agents & Workflow Automation (0-3 years' experience):
Experience with LLM integration pipelines (OpenAI, Vertex AI, Hugging Face models)
Hands-on experience working with voice agents, TTS, STT, caching mechanisms, and ElevenLabs voice technology
Strong understanding of vector databases like Qdrant or Milvus
Hands-on experience with LangChain, LlamaIndex, or agent frameworks (e.g., AutoGen, CrewAI)
Knowledge of FastAPI, Celery, and orchestration of ML/AI services
Familiarity with cloud deployment on GCP, AWS, or Azure
Ability to build and fine-tune matching, ranking, or retrieval-based models
Experience developing agentic workflows for automation
Experience implementing NLP pipelines for parsing, summarizing, and communication (e.g., email bots, script generators)
Comfort working with graph-based data representation and integrating with frontends
Experience with multi-agent collaboration frameworks like Google Agent2Agent
Practical experience in data scraping and enrichment for ML training datasets
Understanding of compliance in AI applications

👉 For more updates, follow us on our LinkedIn page: https://in.linkedin.com/company/pfactorial
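Of the skills above, the vector-database piece is simple to make concrete. A minimal sketch assuming the qdrant-client package in its in-memory local mode; the collection name, toy vectors, and payloads are illustrative (a real pipeline would use embedding-model output):

```python
# Create a collection, upsert a couple of points, and run a similarity search.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # local mode, no server required
client.create_collection(
    collection_name="call_snippets",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="call_snippets",
    points=[
        PointStruct(id=1, vector=[0.9, 0.1, 0.0, 0.0], payload={"text": "refund request"}),
        PointStruct(id=2, vector=[0.0, 0.1, 0.9, 0.1], payload={"text": "delivery delay"}),
    ],
)
hits = client.search(collection_name="call_snippets", query_vector=[0.85, 0.15, 0.0, 0.0], limit=1)
print(hits[0].payload)  # {'text': 'refund request'}
```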

Posted 2 days ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Location: Remote (India-based preferred)
Type: Full-time | Founding Team | High Equity
Company: Flickd (www.flickd.in)

About the Role:
We’re building India’s most advanced virtual try-on engine: think Doji meets TryOnDiffusion, but optimized for real-world speed, fashion, and body diversity. As our ML Engineer (Computer Vision + Try-On), you’ll own the end-to-end pipeline: from preprocessing user/product images to generating hyper-realistic try-on results with preserved pose, skin, texture, and identity. You’ll have full autonomy to build, experiment, and ship, working directly with the React, Spring Boot, DevOps, and design folks already in place. This is not a junior researcher role: this is one person building the brain of the system and setting the foundation for India's biggest visual shopping innovation.

What You’ll Build:
Stage 1: User Image Preprocessing. Human parsing (face, body, hair), pose detection, face/limb alignment; auto orientation, canvas resizing, brightness/contrast normalization.
Stage 2: Product Image Processing. Background removal, garment segmentation (SAM/U^2-Net/YOLOv8); handling occlusions, transparent clothes, long sleeves, etc.
Stage 3: Try-On Engine. Implement and iterate on CP-VTON / TryOnDiffusion / FlowNet; fine-tune on custom data for realism, garment drape, and identity retention.
Inference Optimization: TorchScript / ONNX, batching, inference latency minimization; collaborate with DevOps for Lambda/EC2 + GPU deployment.
Postprocessing: Alpha blending, edge smoothing, fake shadows, cloth-body warps.

You’re a Fit If You:
Have 2-5 years in ML/CV with real shipped work (not just notebooks)
Have worked on human parsing, pose estimation, cloth warping, or GANs
Are hands-on with PyTorch, OpenCV, segmentation models, Flow, or ViT
Can replicate models from arXiv fast and care about output quality
Want to own a system seen by millions, not just improve metrics

Stack You’ll Use:
PyTorch, ONNX, TorchScript, Hugging Face
DensePose, OpenPose, Segment Anything, diffusion models
Docker, Redis, AWS Lambda, S3 (infra is already set up)
MLflow or DVC (can be implemented from scratch)

For exceptional talent, we’re flexible on the cash vs. equity split.

Why This Is a Rare Opportunity:
Build the core AI product that powers a breakout consumer app
Work in a zero-BS, full-speed team (React, Spring Boot, DevOps, and design all in place)
Be the founding ML brain and shape all future hires
Ship in weeks, not quarters, and see your output in front of users instantly

Apply now, or DM Dheekshith (Founder) on LinkedIn with your GitHub or project links. Let’s build something India’s never seen before.
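The inference-optimization stage names TorchScript and ONNX; a minimal sketch of both export paths on a stand-in torchvision classifier (the try-on models named above would follow the same pattern, with more care around inputs). Model choice, input size, and opset version are illustrative assumptions:

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # stand-in for a try-on model
example = torch.randn(1, 3, 224, 224)  # batch of one RGB image

# TorchScript: trace the model for a Python-free runtime
traced = torch.jit.trace(model, example)
traced.save("model.pt")

# ONNX: export for ONNX Runtime / TensorRT, with a dynamic batch axis for batching
torch.onnx.export(
    model, example, "model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)
```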

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

Job Title: AI Engineer – Web Crawling & Field Data Extraction
Location: Remote
Department: Engineering / Data Science
Experience Level: Mid to Senior
Employment Type: Contract to Hire

About the Role:
We are looking for a skilled AI Engineer with strong experience in web crawling, data parsing, and AI/ML-driven information extraction to join our team. You will be responsible for developing systems that automatically crawl websites, extract structured and unstructured data, and intelligently map the extracted content to predefined fields for business use. This role combines practical web scraping, NLP techniques, and AI model integration to automate workflows that involve large-scale content ingestion.

Key Responsibilities:
Design and develop automated web crawlers and scrapers to extract information from various websites and online resources.
Implement robust and scalable data extraction pipelines that convert semi-structured/unstructured data into structured field-level data.
Use natural language processing (NLP) and ML models to intelligently interpret and map extracted content to specific form fields or schemas.
Build systems that can handle dynamic web content, captchas, JavaScript-rendered pages, and anti-bot mechanisms.
Collaborate with frontend/backend teams to integrate extracted data into user-facing applications.
Monitor crawler performance; ensure compliance with legal/data policies; and manage scheduling, deduplication, and logging.
Optimize crawling strategies using AI/heuristics for prioritization, entity recognition, and data validation.
Create tools for auto-filling forms or generating structured records from crawled data.

Required Skills and Qualifications:
Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
3+ years of hands-on experience with web scraping frameworks (e.g., Scrapy, Puppeteer, Playwright, Selenium).
Proficiency in Python, with experience in BeautifulSoup, lxml, requests, aiohttp, or similar libraries.
Experience with NLP libraries (e.g., spaCy, NLTK, Hugging Face Transformers) to parse and map extracted data.
Familiarity with ML-based data classification, extraction, and field mapping.
Knowledge of structured data formats (JSON, XML, CSV) and RESTful APIs.
Experience handling anti-scraping techniques and rate-limiting controls.
Strong problem-solving skills, clean coding practices, and the ability to work independently.

Nice-to-Have:
Experience with AI form understanding (e.g., LayoutLM, DocAI, OCR).
Familiarity with large language models (LLMs) for intelligent data labeling or validation.
Exposure to data pipelines, ETL frameworks, or orchestration tools (Airflow, Prefect).
Understanding of data privacy, compliance, and ethical crawling standards.

Why Join Us?
Work on cutting-edge AI applications in real-world automation.
Be part of a fast-growing and collaborative team.
Opportunity to lead and shape intelligent data ingestion solutions from the ground up.
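The crawl, extract, and map-to-fields loop in the responsibilities is straightforward to sketch. A minimal Python example using requests and BeautifulSoup; the URL, selectors, and target schema are illustrative assumptions, not from the posting:

```python
# Fetch a page, parse it, and map semi-structured text onto a field schema.
import re
import requests
from bs4 import BeautifulSoup

TARGET_FIELDS = {"title", "price", "sku"}  # illustrative schema

def extract_fields(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title_node = soup.select_one("h1")  # illustrative selector
    record = {
        "title": title_node.get_text(strip=True) if title_node else None,
        "price": None,
        "sku": None,
    }
    # Pattern-match the flattened page text for the remaining fields
    text = soup.get_text(" ", strip=True)
    if m := re.search(r"\$\s?(\d+(?:\.\d{2})?)", text):
        record["price"] = float(m.group(1))
    if m := re.search(r"SKU[:\s]+([A-Z0-9-]+)", text):
        record["sku"] = m.group(1)
    return {k: v for k, v in record.items() if k in TARGET_FIELDS}

print(extract_fields("https://example.com/product/123"))  # placeholder URL
```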

Posted 2 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Life at MX:
We are driven by our moral imperative to advance mankind, and it all starts with our people, product, and purpose. We always carry a deep sense of drive and passion with us. If you thrive in a challenging work environment, surrounded by incredible team members who will help you grow, MX is the right place for you. Come build with us and be part of an award-winning company that’s helping create meaningful and lasting change in the financial industry.

We’re looking for a highly skilled engineer who thrives at the intersection of automation, AI, and web data extraction. You will be responsible for building advanced web scraping systems, designing evasion strategies to bypass anti-bot mechanisms, and integrating intelligent data extraction techniques. This role requires strong expertise in TypeScript, Puppeteer (or Playwright), and modern scraping architectures, along with a practical understanding of bot detection mechanisms and machine learning for smarter data acquisition.

Key Responsibilities:
Design and maintain scalable web scraping pipelines using Puppeteer, Playwright, or headless browsers
Implement evasion techniques to bypass bot detection systems (e.g., fingerprint spoofing, dynamic delays, proxy rotation)
Leverage AI/ML models for intelligent parsing, CAPTCHA solving, and anomaly detection
Handle large-scale data collection with distributed scraping infrastructure
Monitor scraping performance, detect bans, and auto-recover from failure states
Build structured outputs (e.g., JSON, GraphQL feeds) from semi-structured/unstructured sources
Collaborate with product and data science teams to shape high-quality, reliable data inputs
Ensure compliance with legal and ethical scraping practices

Required Skills & Experience:
4+ years of experience building and scaling web scraping tools
Strong proficiency in TypeScript and Node.js
Hands-on experience with Puppeteer, Playwright, or Selenium for browser automation
Deep understanding of how bot detection systems work (e.g., Cloudflare, Akamai, hCaptcha)
Experience with proxy management, user-agent spoofing, and fingerprint manipulation
Familiarity with CAPTCHA-solving libraries/APIs, ML-based screen parsing, and OCR
Working knowledge of AI/ML for parsing or automation (e.g., Tesseract, TensorFlow, OpenAI APIs)
Comfortable working with large-scale data pipelines, queues (e.g., Kafka, RabbitMQ), and headless fleet management

Additional Skills:
Experience with cloud infrastructure (AWS/GCP) for scalable scraping jobs
CI/CD and containerization (Docker, Kubernetes) for deployment
Knowledge of ethical and legal considerations around data scraping
Contributions to open-source scraping frameworks or tools

Work Environment:
In this role, a significant aspect of the job involves working in the office for a standard 40-hour workweek. We believe that the collaborative nature of our work and the face-to-face interactions among team members are essential for fostering a dynamic and productive work environment. Being present in the office enables seamless communication, facilitates quick decision-making, and encourages spontaneous collaboration that contributes to the overall success of our projects. We value the synergy that comes from having our team members physically together, allowing for immediate problem-solving, idea exchange, and team building.

Compensation:
The expected earnings for this role may comprise a base salary and other forms of cash compensation, such as bonuses or commissions as applicable. This pay range is just one component of MX’s total rewards package. MX takes a number of factors into account when determining individual starting pay, including the job and level being hired into, location, skillset, and peer compensation.

Please note that applicants for this position must have the legal right to work in India without the need for sponsorship. We are unable to provide work sponsorship for this role, and candidates should be able to verify their eligibility to work in the country independently. Proof of eligibility to work in India will be required as part of the hiring process.
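The evasion basics the posting lists (proxy routing, user-agent override, randomized delays) can be sketched briefly. The role itself asks for TypeScript, but the calls map one-to-one; a minimal sketch with Playwright's Python sync API, where the proxy address and user-agent string are placeholders:

```python
import random
import time
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(
        headless=True,
        proxy={"server": "http://proxy.example.com:8080"},  # placeholder proxy
    )
    context = browser.new_context(
        user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",  # placeholder UA
        viewport={"width": 1366, "height": 768},
    )
    page = context.new_page()
    page.goto("https://example.com", wait_until="domcontentloaded")  # placeholder target
    time.sleep(random.uniform(1.0, 3.0))  # human-like dwell before the next action
    print(page.title())
    browser.close()
```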

Posted 2 days ago

Apply

0 years

2 - 3 Lacs

Ahmedabad, Gujarat, India

On-site

Company Profile:
Nextgen is a UK-based company that provides services for mobile operators worldwide. We are a growing company with 300+ employees and offices in Europe, Asia, India, Cairo, and the US. Our core competency is the provision of services around the commercial aspects of mobile roaming, data, and financial clearing. Our services are based on proprietary software and operated centrally. The software is based on web and Oracle technology, and its main purpose consists in the processing and distribution of roaming data, the settlement of charges between operators, and providing business intelligence applications to our customers.

Role Purpose & Context:
Accounts Assistants in the Receivable Management Team are required to make sure that all GSM and SMS invoices are generated within the invoice-generation deadline, as per the operations calendar. Team members will allocate all bank receipts within 24 hours of receipt loading.

Responsibilities:
Invoice Generation & Dispatch:
Sanity-check GSM & SMS data received from the DCH/client for invoice generation.
Load data and generate GSM & SMS invoices within the deadline.
Check error logs and update the "All Clients Sheet" (Missing Roaming Agreement Sheet) accordingly.
Send generated invoices for client confirmation through an Issue ID for the respective client.
Create Hub parent positions.
Check payable and receivable RAPs once data is loaded and invoices are generated accordingly.
Cross-check MFS/SMS data against generated invoices before they are dispatched.
Manually check for duplicate TAP file billing.
Update the "Data Parsing & Invoice Generation" sheet in a timely manner during invoice generation.
Create MRAs once received from the client.
Regenerate invoices once RAPs are approved by the Account Manager.
Notify the Account Manager to generate a credit note/debit note if an invoice is generated with a negative value.
Share formatted data to the shared path for future reference.

Cash Allocation:
Allocate receipts, or take relevant action, on a daily basis within 24 hours.
Clear the remittance queue on OTRS and share items to the relevant folders.
Chase missing PNs every alternate day; if a PN is not received after being chased three times from the system plus one personalized email to the partner, log an issue with the relevant Account Manager.
Take confirmation from the Account Manager, through the Issue Log, in case of FX loss/gain or any other PN-related queries (issues) for which the back office is not authorized to take further action.
Chase missing invoices required for payment allocation from the APEX or Operations mailbox.
Provide and reply to emails about missing invoice requests.
Send requested payment notifications to the partner/FCH.
Chase and follow up on missing invoices for our customers.

Requirements:
Bachelor's degree in business, accounting, or a related field preferred
Strong communication and relationship-building skills
Experience in invoice reconciliation
Ability to work in a fast-paced, dynamic environment with a focus on results
Excellent analytical skills and attention to detail
Proficiency in Microsoft Office and CRM software
Strong organizational skills
Ability to harness financial data to inform decisions

Benefits:
Health insurance
Provident fund, gratuity
5-day workweek (Monday-Friday)
Quarterly employee engagement activities

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Software Engineer - Content Parsing

The Opportunity:
We're looking for a talented and detail-oriented Software Engineer - Content Parsing to join our dynamic team. In this role, you'll be crucial in extracting, categorizing, and structuring vast amounts of content from various web sources. You'll leverage your expertise in Python and related parsing technologies to build robust, scalable, and highly accurate parsing solutions. This position requires a strong focus on data quality, comprehensive testing, and the ability to implement effective alerting and notification systems.

Responsibilities:
Design, develop, and maintain robust and scalable HTML parsing solutions to extract diverse web content.
Implement advanced content categorization logic to accurately classify and tag extracted data based on predefined schemas and business rules, incorporating AI/ML techniques where applicable.
Develop and integrate alerting and notification systems to monitor parsing performance, identify anomalies, and report on data quality issues.
Write comprehensive unit, integration, and end-to-end test cases to ensure the accuracy, reliability, and robustness of parsing logic, covering all boundary conditions and edge cases.
Optimize parsing performance and efficiency to handle large volumes of data.
Troubleshoot and resolve parsing issues, adapting to changes in website structures and content formats.
Contribute to the continuous improvement of our parsing infrastructure and methodologies, including the research and adoption of new AI-driven parsing techniques.
Manage and deploy parsing solutions in a Linux environment.
Collaborate with DevOps engineers to improve the scaling, deployment, and operational efficiency of parsing solutions.
This role requires occasional weekend work, as content changes are typically deployed on weekends, necessitating monitoring and immediate adjustments.

Qualifications:
Bachelor's degree in Computer Science or a closely related technical field is required.
Experience in software development with a strong focus on data extraction and parsing.
Proficiency in Python and its ecosystem, particularly with libraries for web scraping and parsing (e.g., Beautiful Soup, lxml, Scrapy, Playwright, Selenium).
Demonstrated experience in parsing complex and unstructured HTML content into structured data formats.
Understanding and practical experience with content categorization techniques (e.g., keyword extraction, rule-based classification, basic NLP concepts).
Proven ability to design and implement effective alerting and notification systems (e.g., integrating with Slack, PagerDuty, email, custom dashboards).
Attention to detail and strong unit testing skills, with a meticulous approach to covering all boundary conditions, error cases, and edge scenarios.
Experience working in a Linux environment, including shell scripting and command-line tools.
Familiarity with data storage solutions (e.g., SQL databases) and data serialization formats (e.g., JSON, XML).
Experience with version control systems (e.g., Git).
Excellent problem-solving skills.
Strong communication and collaboration abilities.
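A minimal sketch combining the three duties above: parse HTML into structured records, categorize them with simple rules, and alert when a page parses to nothing (often a sign that markup changed). The webhook URL, categories, and rules are illustrative assumptions:

```python
import requests
from bs4 import BeautifulSoup

RULES = {"politics": ["election", "senate"], "sports": ["match", "league"]}  # illustrative
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder webhook

def parse_articles(html: str) -> list[dict]:
    """Extract article titles from an HTML page into structured records."""
    soup = BeautifulSoup(html, "html.parser")
    articles = []
    for node in soup.select("article"):
        title = node.select_one("h2")
        articles.append({"title": title.get_text(strip=True) if title else ""})
    return articles

def categorize(title: str) -> str:
    """Rule-based classification against the keyword table above."""
    lowered = title.lower()
    for category, keywords in RULES.items():
        if any(k in lowered for k in keywords):
            return category
    return "uncategorized"

def check_and_alert(html: str) -> list[dict]:
    records = [{"category": categorize(a["title"]), **a} for a in parse_articles(html)]
    if not records:  # zero extractions usually means the site's markup changed
        requests.post(SLACK_WEBHOOK, json={"text": "Parser extracted 0 articles"})
    return records
```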

Posted 2 days ago

Apply

0 years

0 Lacs

Delhi, India

On-site

This is a test job post created for the purpose of evaluating and improving our internal recruitment system through Zoho Recruit. It allows the HR and marketing team to simulate the complete hiring journey, including job posting visibility, candidate application tracking, pipeline management, and automation of communications via WhatsApp, email, or integrated tools like Pabbly. This post is not intended for real hiring purposes. Through this test, we aim to verify system performance, data accuracy, resume parsing, automated responses, and cross-platform syncing. Team members may use this listing to submit dummy applications, upload trial resumes, and check whether each automation trigger works as intended. It also helps in assessing interview scheduling features and lead management. All actions and feedback from this test will guide us in setting up a seamless experience for real candidates in the future. Please do not treat this listing as a real job opportunity.

Requirements:
No real qualifications required - this is a dummy listing
Should help test resume upload, forms, or auto-tagging
May be used by internal team only (HR / Tech / Marketing)

Benefits:
Enables smooth hiring operations for real job roles
Verifies Zoho Recruit + Pabbly integrations
Improves candidate journey through testing
100% safe for trial runs - no real candidates will be processed

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

📍 Location: India (Remote)
💼 Equity Only
🧠 Effort = Reward

We’re building something bold at the edge of Web3: a decentralized crypto copy-trading and ghost-trading platform that helps new users simulate and learn before they trade. Think real-time DEX insights, AI-generated trading signals, and structured liquidity analytics, all feeding into a clean, gamified trading experience. We’re assembling a founding tech crew to bring this to life.

🔧 WHO WE’RE LOOKING FOR:
A Solana-native builder who knows how to work with Solana RPCs, data parsing, and on-chain indexing
Can create or evolve a DEX analytics engine specializing in structured liquidity data (multiple pools, LP behavior, routing paths, etc.)
Comfortable scraping, mining, and transforming on-chain data from top Solana DEXs (Orca, Raydium, Phoenix, etc.)
Can think beyond dashboards and imagine real-time copy-trading mechanics, ghost portfolio simulations, and custom trade signal feeds
Must be India-based, able to work independently, and committed to confidentiality
Bonus: prior experience in crypto simulations, DeFi UI/UX, or AI modeling

💰 COMPENSATION:
This is a pure equity opportunity, not a salary role. We're looking for someone who wants skin in the game, someone who sees the upside and is ready to build alongside us on a "reward equals effort" basis.

🧩 THE STACK:
Solana RPC, Web3.js, Serum/Raydium APIs
Node.js / TypeScript / Rust (optional)
GraphQL indexing, TimescaleDB, The Graph (Solana equivalent)
GitHub, Discord, Notion - async-first collaboration

⚡ WHAT YOU GET:
Equity in a platform that will onboard the next 10M crypto users
A seat at the table to shape product direction, tokenomics, and architecture
Built-in runway via Paywaz.com LLC and upcoming integrations
An equity stake tied to milestone contributions

🧠 If you're a builder who thinks in models, sees beyond APIs, and lives to decode on-chain behavior, we want to hear from you.
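The Solana RPC work named above is plain JSON-RPC on the wire. The posting's stack is Node/TypeScript, but the protocol is identical from any language; a minimal Python sketch using the public getBalance method, where the endpoint and pubkey are placeholders:

```python
# Query an account's lamport balance via Solana's JSON-RPC getBalance method.
import requests

RPC_URL = "https://api.mainnet-beta.solana.com"  # public endpoint; use your own in production

def get_balance(pubkey: str) -> int:
    """Return an account's lamport balance."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "getBalance", "params": [pubkey]}
    resp = requests.post(RPC_URL, json=payload, timeout=10).json()
    return resp["result"]["value"]

# Usage (placeholder address):
# print(get_balance("11111111111111111111111111111111"))
```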

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

About Us:

YipitData is the leading market research and analytics firm for the disruptive economy and most recently raised $475M from The Carlyle Group at a valuation of over $1B. Every day, our proprietary technology analyzes billions of alternative data points to uncover actionable insights across sectors like software, AI, cloud, e-commerce, ridesharing, and payments. Our data and research teams transform raw data into strategic intelligence, delivering accurate, timely, and deeply contextualized analysis that our customers—ranging from the world's top investment funds to Fortune 500 companies—depend on to drive high-stakes decisions. From sourcing and licensing novel datasets to rigorous analysis and expert narrative framing, our teams ensure clients get not just data, but clarity and confidence.

We operate globally with offices in the US (NYC, Austin, Miami, Mountain View), APAC (Hong Kong, Shanghai, Beijing, Guangzhou, Singapore), and India. Our award-winning, people-centric culture—recognized by Inc. as a Best Workplace for three consecutive years—emphasizes transparency, ownership, and continuous mastery.

What It's Like to Work at YipitData:

YipitData isn't a place for coasting—it's a launchpad for ambitious, impact-driven professionals. From day one, you'll take the lead on meaningful work, accelerate your growth, and gain exposure that shapes careers.

Why Top Talent Chooses YipitData:

Ownership That Matters: You'll lead high-impact projects with real business outcomes
Rapid Growth: We compress years of learning into months
Merit Over Titles: Trust and responsibility are earned through execution, not tenure
Velocity with Purpose: We move fast, support each other, and aim high—always with purpose and intention

If your ambition is matched by your work ethic—and you're hungry for a place where growth, impact, and ownership are the norm—YipitData might be the opportunity you've been waiting for.

About The Role:

We are seeking a Web Scraping Engineer to join our growing engineering team. In this hands-on role, you'll take ownership of designing, building, and maintaining robust web scrapers that power critical reports and customer experiences across our organization. You will work on complex, high-impact scraping challenges and collaborate closely with cross-functional teams to ensure our data ingestion processes are resilient, efficient, and scalable, while delivering high-quality data to our products and stakeholders.

As Our Web Scraping Engineer You Will:

Refactor and Maintain Web Scrapers: Overhaul existing scraping scripts to improve reliability, maintainability, and efficiency, and implement best coding practices (clean code, modular architecture, code reviews, etc.) to ensure quality and sustainability.
Implement Advanced Scraping Techniques: Utilize sophisticated fingerprinting methods (cookies, headers, user-agent rotation, proxies) to avoid detection and blocking; handle dynamic content, navigate complex DOM structures, and manage session/cookie lifecycles effectively. A minimal sketch of these rotation techniques follows the posting.
Collaborate with Cross-Functional Teams: Work closely with analysts and other stakeholders to gather requirements, align on targets, and ensure data quality; provide support, documentation, and best practices to internal stakeholders to ensure effective use of our web scraped data in critical reporting workflows.
Monitor and Troubleshoot: Develop robust monitoring and alerting frameworks to quickly identify and address failures, and continuously evaluate scraper performance, proactively diagnosing bottlenecks and scaling issues.
Drive Continuous Improvement: Propose new tooling, methodologies, and technologies to enhance our scraping capabilities and processes, and stay up to date with industry trends, evolving bot-detection tactics, and novel approaches to web data extraction.

This is a fully remote opportunity based in India. Standard work hours are from 11am to 8pm IST, but there is flexibility here.

You Are Likely To Succeed If:

You communicate effectively in English with both technical and non-technical stakeholders.
You have a track record of mentoring engineers and managing performance in a fast-paced environment.
You have 3+ years of experience with web scraping frameworks (e.g., Selenium, Playwright, or Puppeteer).
You have a strong understanding of HTTP, RESTful APIs, HTML parsing, browser rendering, and TLS/SSL mechanics.
You bring expertise in advanced fingerprinting and evasion strategies (e.g., browser fingerprint spoofing, request signature manipulation).
You have deep experience managing cookies, headers, session states, and proxy rotation, including deploying both residential and data-center proxies.
You have experience with logging, metrics, and alerting to ensure high availability.
You have the troubleshooting skills to optimize scrapers for efficiency, reliability, and scalability.

What We Offer:

Our compensation package includes comprehensive benefits, perks, and a competitive salary. We care about your personal life, and we mean it: we offer flexible work hours, flexible vacation, a generous 401K match, parental leave, team events, a wellness budget, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. See more on our high-impact, high-opportunity work environment above!

We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer.

Job Applicant Privacy Notice
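As a taste of the rotation techniques this role calls for, the following minimal Python sketch rotates user-agents and proxies per request while a shared session keeps cookie state. The proxy endpoints and user-agent strings are placeholder assumptions, and nothing here represents YipitData's actual tooling.

# Minimal sketch of per-request user-agent and proxy rotation with
# session/cookie management. The proxies below are placeholders and
# must be replaced with real endpoints before the requests will connect.
import random
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]
PROXIES = [
    "http://residential-proxy.example.com:8000",  # placeholder
    "http://datacenter-proxy.example.com:8000",   # placeholder
]

def fetch(url, session):
    # Rotate the user-agent and proxy on every call; cookies set by the
    # target persist in the shared session object.
    proxy = random.choice(PROXIES)
    headers = {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.9",
    }
    resp = session.get(
        url,
        headers=headers,
        proxies={"http": proxy, "https": proxy},
        timeout=15,
    )
    resp.raise_for_status()
    return resp

session = requests.Session()
page = fetch("https://example.com/products", session)
print(page.status_code, len(page.text))

Keeping one session per target site preserves cookies across requests, which matters when session state is itself part of the fingerprint a site checks.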

Posted 2 days ago

Apply


0 years

0 Lacs

India

Remote

Your Next Challenge: Reinvent Healthcare with Code & AI

We believe healthcare deserves better — better efficiency, better tech, and better outcomes. And we believe that happens when brilliant minds like yours meet bold problems like ours.

At Jorie AI, we're building automation and AI solutions that transform the painful inefficiencies of Revenue Cycle Management (RCM) into seamless, intelligent workflows. We already help hospitals and providers save millions, and now we're ready to take it to the next level. We need a doer, dreamer, and builder who thrives at the intersection of:

🧠 Artificial Intelligence
💻 Python Automation
🩺 Healthcare RCM

What You'll Actually Do

Break down messy RCM processes, then rebuild them with smart, scalable automation.
Code in Python like it's second nature — building bots, APIs, and backends to make healthcare run smarter.
Apply AI/ML to solve real-world problems like intelligent coding, claim denial prediction, document parsing, and AR prioritization (see the sketch after this posting).
Work with a cross-functional team of product managers, engineers, and healthcare domain experts who geek out just as much as you do.
Stay ahead of the curve — you'll have the freedom to innovate, experiment, and iterate.

Skills We Are Looking For

✨ Someone who loves to solve hard problems with smart code.
✨ Fluent in Python (not just "comfortable").
✨ Knows healthcare RCM processes inside out (or at least has solid experience working in RCM environments).
✨ Has played with AI/ML tools, APIs, or models — and knows how to make them work in production.
✨ Curious. Relentless. Collaborative. You ship fast and learn faster.

What's In It for You?

🎯 Work that actually matters — fixing a broken healthcare system, one automated workflow at a time.
🎯 Remote-first, flexible work culture.
🎯 A fast-moving team where your ideas shape the product.
🎯 A front-row seat in the healthtech revolution.
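As one hedged illustration of the claim-denial work mentioned above (not Jorie AI's actual pipeline), the sketch below trains a tiny scikit-learn classifier that routes denial remark text to a category. The remark texts and category names are invented for the example.

# Illustrative sketch: a minimal text classifier that maps claim-denial
# remark text to a denial category, the kind of building block an RCM
# automation workflow might use to route denials to the right worklist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: denial remark text -> denial category.
remarks = [
    "Service not covered under member plan",
    "Prior authorization number missing or invalid",
    "Duplicate claim submitted for same service date",
    "Patient coverage terminated before date of service",
]
labels = ["non_covered", "auth_missing", "duplicate", "eligibility"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(remarks, labels)

# Route a new denial based on its predicted category.
new_remark = "Claim denied: prior auth not on file"
print(model.predict([new_remark])[0])  # expected: "auth_missing"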

Posted 3 days ago

Apply

3.0 years

0 Lacs

India

On-site

About Us

Adfinity Global Solutions is a technology-driven company focused on delivering effective digital display solutions. We design and deploy outdoor, indoor, and transparent displays that help brands reach people with clarity and purpose. We are now expanding into the entertainment space, guided by the same principles that define our work. Every solution we build is meant to serve real needs, create real engagement, and reflect the trust our clients place in us. As we grow, we are looking for individuals who share this mindset and are ready to contribute meaningfully to what we are building. Visit www.adfinityglobal.com for more details.

Role Overview

We are looking for a Flutter developer who knows their way around Dart and cares about writing clean, reliable code. You should be comfortable working on real production apps, handling state, structuring things well, and making sure everything runs smoothly on both Android and iOS. You will work closely with the design, product, and backend teams to bring ideas to life and make sure the app feels right in the hands of our users.

Key Responsibilities

Build and maintain scalable, modular Flutter applications for both Android and iOS.
Work with Riverpod (including riverpod_generator) and Freezed to implement a clean, immutable, reactive architecture.
Integrate with RESTful APIs using robust error handling and state management practices.
Implement custom UI/UX, animations, and transitions based on design mocks.
Optimize app performance using profiling tools and asynchronous programming best practices.
Use tools like build_runner, linters, and custom annotations to maintain clean code and architecture.
Work closely with backend and design teams to ensure accurate data flow and UI/UX precision.
Debug platform-specific issues on iOS and Android and ensure smooth deployment pipelines.
Integrate third-party packages (e.g., cached_network_image, flutter_html) and native SDKs as needed.
Maintain and improve collaborative workflows with Git and code reviews.

Required Skills and Experience

3+ years of professional Flutter development experience.
Strong expertise in Dart and the Flutter SDK.
Hands-on experience with Riverpod for state management and code generation using riverpod_generator.
Experience with Freezed for building immutable models.
Proficiency in REST API integration, JSON parsing, and structured error handling.
Familiarity with dependency injection, modular code structure, and clean architecture principles.
Understanding of Flutter performance profiling tools and optimization techniques.
Prior work with custom UI design, animations, and third-party libraries.
Solid grasp of Git and collaborative version control practices.
Experience with build_runner, annotations, linter rules, and project structuring.

Bonus Points For

Experience with Firebase services such as Analytics, Crashlytics, and Messaging.
Hands-on experience with app store deployment, platform-specific debugging, and resolving Android/iOS build issues.
Contributions to open source or Flutter community plugins.
A good eye for design, transitions, and micro-interactions.

What We Offer

An opportunity to be part of a growing platform in the entertainment space.
A focused team where your work is seen and matters.
Clear ownership over features that reach real users every day.
The space to learn, try things, and grow through hands-on work.

Experience: 3+ years
Salary: Based on candidate's experience and current CTC

To Apply

Send your resume, GitHub/portfolio/app links, and a short note about a Flutter feature or UI you're proud of to haritha@adfinityglobal.com
Subject: Flutter Frontend Engineer – Application

Job Types: Full-time, Permanent
Benefits: Health insurance
Work Location: In person

Posted 3 days ago

Apply

4.0 - 5.0 years

6 - 8 Lacs

Gurgaon

On-site

Project Description

We are looking for a skilled Document AI / NLP Engineer to develop intelligent systems that extract meaningful data from documents such as PDFs, scanned images, and forms. In this role, you will build document processing pipelines using OCR and NLP technologies, fine-tune ML models for tasks like entity extraction and classification, and integrate those solutions into scalable cloud-based applications. You will collaborate with cross-functional teams to deliver high-performance, production-ready pipelines and stay up to date with advancements in the document understanding and machine learning space.

Responsibilities

Design, build, and optimize document parsing pipelines using tools like Amazon Textract, Azure Form Recognizer, or Google Document AI (see the sketch after this posting).
Perform data preprocessing, labeling, and annotation for training machine learning and NLP models.
Fine-tune or train models for tasks such as Named Entity Recognition (NER), text classification, and layout understanding using PyTorch, TensorFlow, or HuggingFace Transformers.
Integrate document intelligence capabilities into larger workflows and applications using REST APIs, microservices, and cloud components (e.g., AWS Lambda, S3, SageMaker).
Evaluate model and OCR accuracy, applying post-processing techniques or heuristics to improve precision and recall.
Collaborate with data engineers, DevOps, and product teams to ensure solutions are robust, scalable, and meet business KPIs.
Monitor, debug, and continuously enhance deployed document AI solutions.
Maintain up-to-date knowledge of industry trends in OCR, Document AI, NLP, and machine learning.

Must-Have Skills

4-5 years of hands-on experience in machine learning, document AI, or NLP-focused roles.
Strong expertise in OCR tools and frameworks, especially Amazon Textract, Azure Form Recognizer, Google Document AI, or open-source tools like Tesseract, LayoutLM, or PaddleOCR.
Solid programming skills in Python and familiarity with ML/NLP libraries: scikit-learn, spaCy, transformers, PyTorch, TensorFlow, etc.
Experience working with structured and unstructured data formats, including PDF, images, JSON, and XML.
Hands-on experience with REST APIs, microservices, and integrating ML models into production pipelines.
Working knowledge of cloud platforms, especially AWS (S3, Lambda, SageMaker) or their equivalents.
Understanding of NLP techniques such as NER, text classification, and language modeling.
Strong debugging, problem-solving, and analytical skills.
Clear verbal and written communication skills for technical and cross-functional collaboration.

Nice-to-Have Skills: N/A

Languages: English (B2 Upper Intermediate)
Seniority: Senior
Location: Gurugram, India
Category: AI/ML, BCM Industry
Posted: 29/07/2025
Req. VR-116250
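A minimal sketch of the first pipeline stage named above, using Amazon Textract through boto3. The region, file name, and feature choice are illustrative assumptions, and AWS credentials are presumed to be configured in the environment.

# Minimal sketch: send a scanned page to Amazon Textract and pull out
# the recognized text lines. FORMS additionally asks Textract to detect
# key-value pairs, which a fuller pipeline would resolve via block
# relationships before downstream NER or classification.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("scanned_invoice.png", "rb") as f:  # placeholder document path
    doc_bytes = f.read()

response = textract.analyze_document(
    Document={"Bytes": doc_bytes},
    FeatureTypes=["FORMS"],
)

# Blocks come back as a flat list; LINE blocks carry the recognized text.
lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
print("\n".join(lines[:10]))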

Posted 3 days ago

Apply

4.0 years

2 - 6 Lacs

Ahmedabad

On-site

Position: Android Developer (CE48SF RM 3425)
Shift timing (if any): General Shift
Work Mode: EIC office / Hybrid
Minimum Relevant Experience: 4+ years
Education Required: Bachelor's / Master's / PhD: B.E. Computers, MCA is preferable
Must have: XAML for UI development, RESTful APIs, JSON/XML parsing, networking on Android, debugging and troubleshooting, mobile application lifecycle (Android), Java, Kotlin
Good to have: Bluetooth/BLE programming, Java, C, C++

Overview

We are looking for a talented and motivated Android Developer to join our innovative software development team. The ideal candidate should have a strong passion for mobile application development and a proven track record of building high-quality native Android applications. You will collaborate with cross-functional teams to design, develop, and deploy Android solutions that align with our product vision and business goals.

Key Responsibilities

Design, develop, and maintain native Android applications using Kotlin and/or Java.
Collaborate with product managers, designers, and fellow developers to define, design, and implement new features.
Write clean, maintainable, and scalable code following Android development best practices.
Optimize application performance, responsiveness, and usability.
Participate in Agile development processes: sprint planning, daily stand-ups, retrospectives.
Diagnose and resolve bugs, crashes, and performance issues.
Conduct code reviews and support internal development improvements.
Implement security and data protection practices across the app.

Required Skills & Qualifications

Strong experience in native Android development using Kotlin and/or Java.
Solid understanding of the Android SDK, Jetpack components, and Material Design.
Experience working with MVVM, MVP, or Clean Architecture patterns.
Proficiency in integrating RESTful APIs and handling JSON/XML data.
Experience with Room, SQLite, or other local storage solutions.
Hands-on experience publishing apps to the Google Play Store.
Familiarity with the Android lifecycle, background processing, and threading.
Experience with platform-specific features such as camera, GPS, sensors, and notifications.
Strong debugging and performance tuning skills.
Good communication and documentation abilities.
Ability to work both independently and collaboratively in a team.

Nice to Have

Experience with Bluetooth/BLE integration.
Familiarity with Firebase services (Authentication, Cloud Messaging, Analytics).
Experience working with CI/CD pipelines and tools like Fastlane or GitHub Actions.
Exposure to Jetpack Compose and willingness to adopt it.
Knowledge of Gradle, Proguard, and general mobile app optimization techniques.
Understanding of unit testing and UI testing using tools like JUnit, Espresso, or Mockito.
Familiarity with UML diagrams, flow charts, and technical documentation.

Tools & Technologies

Languages: Kotlin, Java
Development Tools: Android Studio, ADB, Android Emulator
Version Control: Git, Bitbucket, GitHub
Project Management: JIRA, Confluence
Testing Tools: Espresso, JUnit, Mockito, Firebase Test Lab
Build & Release: Gradle, Proguard, Fastlane, Play Console

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Ahmedabad
Experience: 4-8 years
Notice period: 0-15 days

Posted 3 days ago

Apply