ISITCA PRIVATE LIMITED

4 job openings at ISITCA PRIVATE LIMITED
Data Engineer | New Delhi, Delhi, India | 5 years | Not disclosed | On-site | Full Time

About the Role:
We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You'll be working on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You'll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors).

Responsibilities:
- Own and optimize Airflow scraping DAGs for 60+ sites (a minimal sketch follows this posting)
- Implement validation checks, retry logic, and error alerts
- Build pre-processing routines to clean OCR'd text
- Create data normalization and deduplication workflows
- Maintain data integrity across MySQL and PostgreSQL
- Collaborate with the ML team for downstream AI use cases

Requirements:
- 2–5 years of experience in Python-based data engineering
- Experience with Airflow, Pandas, and OCR (Tesseract or AWS Textract)
- Solid SQL and schema design skills (MySQL/PostgreSQL)
- Familiarity with CSV processing and data pipelines
- Bonus: experience with scraping using Scrapy or Selenium

Location: Delhi (in-office only)
Salary Range: 50-80k/month
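The first two responsibilities, Airflow DAGs with validation and retry logic, can be pictured concretely. Below is a minimal sketch, assuming Airflow 2.4+ with the TaskFlow API; the site list and the scrape/validate bodies are hypothetical stand-ins, not ISITCA's actual pipeline.

```python
# Minimal sketch of a scraping DAG with retries, validation, and failure
# alerts. Assumes Airflow 2.4+ (TaskFlow API); SITES, scrape(), and
# validate() are illustrative placeholders for per-site logic.
from datetime import datetime, timedelta

from airflow.decorators import dag, task

SITES = ["example-site-a", "example-site-b"]  # placeholder for the 60+ sources

default_args = {
    "retries": 3,                          # retry logic for flaky sites
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,              # error alerts via Airflow's mailer
}

@dag(schedule="@daily", start_date=datetime(2024, 1, 1),
     catchup=False, default_args=default_args)
def scraping_pipeline():
    @task
    def scrape(site: str) -> list[dict]:
        # Hypothetical: fetch pages/PDFs for one site and return raw rows.
        return [{"site": site, "title": "..."}]

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Drop rows with missing required fields; fail loudly if all are bad.
        clean = [r for r in rows if r.get("title")]
        if not clean:
            raise ValueError("validation failed: no usable rows")
        return clean

    for site in SITES:
        validate(scrape(site))

scraping_pipeline()
```

One task pair per site keeps a single flaky source from blocking the rest, and the retry and email settings in default_args cover the alerting requirement at the task level.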

Data Engineer | Delhi, Delhi, India | 2–5 years | INR not disclosed | On-site | Full Time

About the Role:
We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You'll be working on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You'll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors).

Responsibilities:
- Own and optimize Airflow scraping DAGs for 60+ sites
- Implement validation checks, retry logic, and error alerts
- Build pre-processing routines to clean OCR'd text
- Create data normalization and deduplication workflows
- Maintain data integrity across MySQL and PostgreSQL
- Collaborate with the ML team for downstream AI use cases

Requirements:
- 2–5 years of experience in Python-based data engineering
- Experience with Airflow, Pandas, and OCR (Tesseract or AWS Textract)
- Solid SQL and schema design skills (MySQL/PostgreSQL)
- Familiarity with CSV processing and data pipelines
- Bonus: experience with scraping using Scrapy or Selenium (a Scrapy spider sketch follows this posting)

Location: Delhi (in-office only)
Salary Range: 50-80k/month
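The Scrapy bonus skill can also be sketched briefly. A minimal spider under assumed markup; the URL, CSS selectors, and field names are placeholders, not any of the actual 60+ sites.

```python
# Minimal Scrapy spider sketch: crawl a listing page, emit one item per
# entry, and follow pagination. All selectors and the URL are hypothetical.
import scrapy

class ListingSpider(scrapy.Spider):
    name = "listing"
    start_urls = ["https://example.com/listings"]  # placeholder URL

    def parse(self, response):
        for row in response.css("div.listing"):
            yield {
                "title": row.css("h2::text").get(),
                "pdf_url": row.css("a.pdf::attr(href)").get(),
            }
        # Follow the next page, if the site paginates.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```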

AI Engineer | New Delhi, Delhi, India | 3 years | Not disclosed | On-site | Full Time

Key Responsibilities:
- Design, develop, and deploy agentic AI systems capable of autonomous decision-making, reasoning, and executing multi-step tasks.
- Build and fine-tune LLM-powered applications using frameworks like LangChain, LlamaIndex, or Semantic Kernel.
- Integrate LLMs into real-world workflows via APIs and tools (e.g., OpenAI, Hugging Face, Anthropic, Mistral).
- Architect and implement scalable pipelines for model orchestration, retrieval-augmented generation (RAG), and memory/long-context strategies (a minimal RAG sketch follows this posting).
- Develop robust interfaces between LLM agents and external tools, APIs, and databases.
- Experiment with prompt engineering, few-shot learning, and model evaluation techniques to optimise agent behaviour.
- Collaborate with cross-functional teams to understand business use cases and deliver end-to-end solutions.

Required Skills & Experience:
- 3+ years of experience in AI/ML, with a strong track record of freelance or consulting work.
- Proven experience with LLMs, prompt engineering, and agentic frameworks.
- Proficient in Python and familiar with modern LLM toolchains (e.g., LangChain, OpenAI Function Calling, ReAct, AutoGPT, AgentOps).
- Experience building autonomous or semi-autonomous agents capable of reasoning and planning.
- Solid understanding of vector stores (e.g., Pinecone, FAISS, Weaviate) and RAG pipelines.
- Knowledge of cloud platforms (Azure, AWS, GCP) and containerization (Docker, Kubernetes).
- Ability to work independently, manage clients, and deliver on fast-paced timelines.

Job Types: Full-time, Permanent, Contractual / Temporary
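The RAG responsibility is the most code-shaped item here. Below is a minimal sketch of the retrieval-augmented generation loop, assuming the openai Python client v1.x, with a naive in-memory cosine search standing in for Pinecone/FAISS/Weaviate; the model names and toy corpus are placeholders.

```python
# Minimal RAG sketch: embed a toy corpus, retrieve the nearest chunk for a
# query, and pass it to a chat model as context. Assumes the openai Python
# client v1.x; models and corpus are placeholders, not a production setup.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CORPUS = [
    "Invoices are scraped nightly and parsed with OCR.",
    "Duplicate records are merged on (vendor, invoice_no).",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(CORPUS)

def answer(question: str, k: int = 1) -> str:
    q = embed([question])[0]
    # Cosine similarity over the corpus; a real vector store would replace this.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(CORPUS[i] for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How are duplicates handled?"))
```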

Data Engineer (Scraping & Pipeline Stabilization) | New Delhi, Delhi, India | 5 years | Not disclosed | On-site | Full Time

About the Role:
We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You'll be working on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You'll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors).

Responsibilities:
- Own and optimize Airflow scraping DAGs for 60+ sites
- Implement validation checks, retry logic, and error alerts
- Build pre-processing routines to clean OCR'd text (an OCR clean-up and deduplication sketch follows this posting)
- Create data normalization and deduplication workflows
- Maintain data integrity across MySQL and PostgreSQL
- Collaborate with the ML team for downstream AI use cases

Requirements:
- 2–5 years of experience in Python-based data engineering
- Experience with Airflow, Pandas, and OCR (Tesseract or AWS Textract)
- Solid SQL and schema design skills (MySQL/PostgreSQL)
- Familiarity with CSV processing and data pipelines
- Bonus: experience with scraping using Scrapy or Selenium

Location: Delhi (in-office only)

Additional criteria:
- Minimum 3 years of experience
- Must be a graduate: B.Tech preferred, or BCA/MCA/BSc/MSc

Mandatory keywords (must-have skills): scraping, Python, Selenium, NumPy, Pandas
Optional keywords (good-to-have skills): Beautiful Soup, MySQL, Large Language Models (LLM), Machine Learning, Natural Language Processing (NLP), GitHub, Django
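The OCR clean-up and deduplication items map directly onto Pandas. A minimal sketch; the regexes, column names, and dedup key are illustrative assumptions, not the team's actual schema.

```python
# Minimal sketch of OCR text clean-up plus normalization/deduplication with
# Pandas. Column names and the dedup key are hypothetical.
import re
import pandas as pd

def clean_ocr(text: str) -> str:
    text = text.replace("\x0c", " ")        # strip form-feed page breaks
    text = re.sub(r"-\n(\w)", r"\1", text)  # re-join words hyphenated at EOL
    text = re.sub(r"\s+", " ", text)        # collapse whitespace runs
    return text.strip()

df = pd.DataFrame({
    "vendor": ["Acme Corp", "ACME corp ", "Globex"],
    "invoice_no": ["A-001", "A-001", "B-100"],
    "body": ["Total:\n 1,200", "Total: 1,200", "Total: 950"],
})

df["body"] = df["body"].map(clean_ocr)
df["vendor_key"] = df["vendor"].str.strip().str.lower()  # normalize before dedup
df = df.drop_duplicates(subset=["vendor_key", "invoice_no"], keep="first")
print(df)
```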