8.0 years
15 - 25 Lacs
indore
On-site
Job description
Location: Indore
Job Type: Full-time
Experience Required: 8+ years
Department: Technology / Mobile Development

Job Summary: We are seeking a passionate and skilled Swift Developer with 8+ years of experience in iOS application development. The ideal candidate has hands-on experience building mobile applications with Swift and a strong understanding of Apple's ecosystem. You will collaborate with cross-functional teams to develop, test, and deliver high-performance iOS applications.

Key Responsibilities:
• Develop and maintain advanced iOS applications using Swift.
• Collaborate with UX/UI designers, product managers, and backend developers.
• Integrate APIs and third-party services into applications.
• Write clean, scalable, and well-documented code.
• Debug and resolve issues to improve performance and stability.
• Keep up to date with the latest iOS trends, technologies, and best practices.
• Participate in code reviews and contribute to technical discussions.

Requirements:
• 8+ years of experience in Swift and iOS app development.
• Strong knowledge of Xcode, UIKit, Core Data, and other iOS frameworks.
• Familiarity with RESTful APIs, JSON parsing, and third-party libraries.
• Good understanding of mobile UI/UX standards.
• Experience with version control systems such as Git.
• Ability to write unit and UI tests to ensure robustness.
• Strong analytical and problem-solving skills.
• Bachelor's degree in Computer Science, Engineering, or a related field.

Job Type: Full-time
Pay: ₹1,506,698.33 - ₹2,560,238.81 per year
Benefits: Health insurance, Provident Fund
Location: Indore, Madhya Pradesh (Required)
Work Location: In person
Posted Just now
0 years
0 Lacs
maharashtra, india
Remote
Hiring Freelance Web Scraping / Parsing Expert

We are looking for a talented freelancer to join our project focused on data scraping and parsing for sports and iGaming platforms.

Responsibilities
• Build and maintain parsers and scrapers for sports data websites
• Handle anti-bot mechanisms (Cloudflare, CAPTCHAs, rotating IPs)
• Deliver clean, structured data (JSON, DB-ready format)
• Optimize for reliability and speed (near real-time parsing)

Requirements
• Strong experience with Selenium / Playwright / Puppeteer (see the sketch below)
• Familiarity with Python or Node.js for automation
• Experience with proxy rotation and headless browser handling
• Able to debug and maintain scrapers proactively

Engagement
• Freelance (remote)
• Flexible hours, but must be responsive for urgent fixes
• Paid per project or on a monthly retainer (to be discussed)

If you are experienced with large-scale scraping or sports data parsing and want to work on a challenging and fast-paced project, feel free to DM me or comment below.
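For illustration, here is a minimal sketch of the kind of headless-browser scraper this role describes, using Playwright's sync API with an optional proxy. The target URL, CSS selectors, and proxy address are hypothetical placeholders, not a real sports-data site.

```python
# Minimal Playwright scraping sketch; URL, selectors, and proxy are placeholders.
import json
from playwright.sync_api import sync_playwright

def scrape_fixtures(url: str, proxy: str | None = None) -> list[dict]:
    with sync_playwright() as p:
        launch_args = {"headless": True}
        if proxy:
            # e.g. "http://user:pass@host:port" from a rotating-proxy pool
            launch_args["proxy"] = {"server": proxy}
        browser = p.chromium.launch(**launch_args)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        rows = page.query_selector_all(".fixture-row")  # hypothetical selector
        data = [
            {
                "home": row.query_selector(".home").inner_text(),
                "away": row.query_selector(".away").inner_text(),
                "odds": row.query_selector(".odds").inner_text(),
            }
            for row in rows
        ]
        browser.close()
    return data  # DB-ready, structured records

if __name__ == "__main__":
    print(json.dumps(scrape_fixtures("https://example.com/fixtures"), indent=2))
```

In practice, a production scraper would add retry logic and rotate proxies/user agents per request to cope with the anti-bot mechanisms mentioned above.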
Posted 1 day ago
6.0 years
0 Lacs
udaipur, rajasthan, india
On-site
Job Title: Datadog Observability & Automation Specialist
Location: Pune / Mumbai / Noida / Udaipur
Job Type: Full-Time/Hybrid
Experience: 7-15 years

Job Summary: We are seeking a skilled Datadog Observability & Automation Specialist with hands-on experience in building observability practices and implementing end-to-end automation, including AI and GenAI capabilities. The ideal candidate will be responsible for configuring and optimizing observability platforms to deliver actionable insights into system performance and reliability across various industry use cases.

Key Responsibilities:
• Design, implement, and maintain heterogeneous observability solutions spanning infrastructure, logs, synthetic monitoring, automation, AI, and GenAI.
• Create and manage dashboards, monitors, alerts, service maps, and user interfaces (see the sketch below).
• Collaborate with DevOps, Development, and Security teams to define and maintain SLIs, SLOs, and SLAs.
• Develop integrations between observability platforms and other systems (e.g., hybrid cloud, on-prem data centers, end-user assets, Kubernetes, Terraform, CI/CD tools).
• Optimize alerting mechanisms to reduce false positives and improve incident response.
• Provide support during incidents, including root cause analysis and post-mortem reviews.
• Conduct training sessions for internal teams on effective platform usage.

Required Skills and Qualifications:
• 6+ years of experience in development, automation, system monitoring, and DevOps.
• 3+ years of hands-on experience with advanced automation and observability platforms such as Dynatrace, Datadog, AppDynamics, New Relic, Zabbix, ELK (Elasticsearch, Logstash, Kibana), AI/GenAI, and machine learning.
• Strong understanding of infrastructure components, including cloud platforms (AWS, Azure, GCP), containers (Docker, Kubernetes), networking, and operating systems.
• Proficiency in scripting languages such as Python, Bash, or shell.
• Experience with CI/CD pipelines and automation tools (e.g., Jenkins, GitHub Actions, Terraform, Packer).
• Familiarity with log collection, parsing, and automation using observability platforms.
• Strong analytical and problem-solving skills with a product-oriented mindset.

Preferred Qualifications:
• Certifications in observability platforms (e.g., Datadog Certified Monitoring Professional, Dynatrace, AppDynamics, ELK).
• Experience with additional monitoring tools (e.g., Prometheus, Grafana, New Relic, Nagios, ManageEngine).
• Familiarity with ITIL processes and incident management tools (e.g., PagerDuty, ServiceNow).

Why Join BXI Technologies?
• Lead innovation in AI, Cloud, and Cybersecurity with top-tier partners.
• Be part of a forward-thinking team driving digital transformation.
• Access to cutting-edge technologies and continuous learning opportunities.
• Competitive compensation and performance-based incentives.
• Flexible and dynamic work environment based in India.

About BXI Tech
BXI Tech is a purpose-driven technology company, backed by private equity and focused on delivering innovation in engineering, AI, cybersecurity, and cloud solutions. We combine deep tech expertise with a commitment to creating value for both businesses and communities. Our ecosystem includes BXI Ventures, which invests across technology, healthcare, real estate, and hospitality, and BXI Foundation, which leads impactful initiatives in education, healthcare, and care homes. Together, we aim to drive sustainable growth and meaningful social impact.
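As a rough illustration of the monitor-and-alert configuration work above, here is a hedged sketch that creates a metric monitor through Datadog's public v1 Monitors API. The query, thresholds, tags, and notification handle are illustrative placeholders, not a real environment's values.

```python
# Hedged sketch: create a Datadog metric monitor via the v1 Monitors API.
# Query, thresholds, tags, and the @slack handle are illustrative placeholders.
import os
import requests

DD_SITE = "https://api.datadoghq.com"

def create_cpu_monitor() -> dict:
    payload = {
        "name": "High CPU on production hosts",
        "type": "metric alert",
        "query": "avg(last_5m):avg:system.cpu.user{env:prod} by {host} > 80",
        "message": "CPU above 80% for 5 minutes. Notify @slack-ops-oncall",
        "tags": ["team:sre", "env:prod"],
        "options": {
            "thresholds": {"critical": 80, "warning": 70},
            "notify_no_data": False,
        },
    }
    resp = requests.post(
        f"{DD_SITE}/api/v1/monitor",
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Scripting monitors this way (rather than clicking them together) is what makes the "end-to-end automation" in the summary practical at fleet scale.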
Posted 1 day ago
3.0 years
0 Lacs
gurugram, haryana, india
On-site
Roles and Responsibilities
• Build and maintain scalable, fault-tolerant data pipelines to support GenAI and analytics workloads across OCR, documents, and case data.
• Manage ingestion and transformation of semi-structured legal documents (PDF, Word, Excel) into structured formats.
• Enable RAG workflows by processing data into chunked, vectorized formats with metadata (see the sketch below).
• Handle large-scale ingestion from multiple sources into cloud-native data lakes (S3, GCS), data warehouses (BigQuery, Snowflake), and PostgreSQL.
• Automate pipelines using orchestration tools like Airflow/Prefect, including retry logic, alerting, and metadata tracking.
• Collaborate with ML Engineers to ensure data availability, traceability, and performance for inference and training pipelines.
• Implement data validation and testing frameworks using Great Expectations or dbt.
• Integrate OCR pipelines and post-processing outputs for embedding and document search.
• Design infrastructure for streaming vs. batch data needs and optimize for cost, latency, and reliability.

Qualifications
• Bachelor's or Master's degree in Computer Science, Data Engineering, or equivalent.
• 3+ years of experience in building distributed data pipelines and managing multi-source ingestion.
• Proficiency with Python, SQL, and data tools like Pandas and PySpark.
• Experience with data orchestration tools (Airflow, Prefect) and file formats like Parquet, Avro, and JSON.
• Hands-on experience with cloud storage/data warehouse systems (S3, GCS, BigQuery, Redshift).
• Understanding of GenAI and vector database ingestion pipelines is a strong plus.
• Bonus: Experience with OCR tools (Tesseract, Google Document AI), PDF parsing libraries (PyMuPDF), and API-based document processors.
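To make the "chunked, vectorized formats with metadata" step concrete, here is a minimal sketch of document chunking for RAG ingestion. The chunk sizes, ID scheme, and metadata fields are assumptions for illustration; the embedding call itself is left as a stand-in for whatever model or API the real pipeline uses.

```python
# Illustrative RAG ingestion step: split parsed document text into overlapping
# chunks with metadata, ready for embedding and vector-database loading.
import hashlib

def chunk_document(text: str, doc_id: str, source: str,
                   chunk_size: int = 800, overlap: int = 100) -> list[dict]:
    """Return overlapping chunks with RAG-ready metadata (fields illustrative)."""
    chunks = []
    step = chunk_size - overlap
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        body = text[start:start + chunk_size]
        if not body.strip():
            continue
        chunks.append({
            "id": hashlib.sha1(f"{doc_id}:{i}".encode()).hexdigest(),
            "text": body,
            "metadata": {"doc_id": doc_id, "source": source,
                         "chunk_index": i, "char_offset": start},
        })
    return chunks

# Each chunk's "text" would then be embedded and stored alongside its metadata
# in a vector database for retrieval.
```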
Posted 1 day ago
3.0 - 10.0 years
0 Lacs
chennai, tamil nadu, india
Remote
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

CMSTDR Senior (TechOps)

Key Capabilities:
• Experience working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA
• Minimum of Splunk Power User Certification
• Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc.
• Perform remote and on-site gap assessments of the SIEM solution.
• Define evaluation criteria and approach based on the client requirement and scope, factoring in industry best practices and regulations
• Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.)
• Evaluate the SIEM based on the defined criteria and prepare audit reports
• Good experience providing consulting to customers during the testing, evaluation, pilot, production, and training phases to ensure a successful deployment.
• Understand customer requirements and recommend best practices for SIEM solutions.
• Offer consultative advice on security principles and best practices related to SIEM operations
• Design and document a SIEM solution to meet the customer needs
• Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers
• Verification of data of log sources in the SIEM, following the Common Information Model (CIM)
• Experience in parsing and masking of data prior to ingestion in the SIEM
• Provide support for the data collection, processing, analysis, and operational reporting systems, including planning, installation, configuration, testing, troubleshooting, and problem resolution
• Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources
• Assist the client with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM
• Experience in handling big data integration via Splunk
• Expertise in SIEM content development, including developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems
• Hands-on experience in development and customization of Splunk Apps & Add-ons
• Builds advanced visualizations (interactive drilldowns, glass tables, etc.)
• Build and integrate contextual data into notable events
• Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks
• Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications.
• Experience in installation, configuration, and usage of premium Splunk Apps and Add-ons such as the ES App, UEBA, ITSI, etc.
• Sound knowledge of the configuration of alerts and reports.
• Good exposure to automatic lookups, data models, and creating complex SPL queries (see the sketch below).
• Create, modify, and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements
• Work with the client SPOC for correlation rule tuning (as per the use case management life cycle), incident classification, and prioritization recommendations
• Experience in creating custom commands, custom alert actions, adaptive response actions, etc.

Qualification & experience:
• Minimum of 3 to 10 years' experience with a depth of network architecture knowledge that will translate over to deploying and integrating a complicated security intelligence solution into global enterprise environments.
• Strong oral, written, and listening skills are an essential component of effective consulting.
• Strong background in network administration. Ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary.
• Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security, and troubleshooting.
• Good to have: experience with designing and implementing Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance, and Security Management; multiple cluster deployment and management experience per vendor guidelines and industry best practices; the ability to troubleshoot Splunk platform and application issues, escalate issues, and work with Splunk support to resolve them.
• Certification in any one SIEM solution such as IBM QRadar, Exabeam, or Securonix will be an added advantage.
• Certifications in a core security-related discipline will be an added advantage.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
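To ground the SPL-query work described above, here is a minimal, hedged sketch of running an SPL search through Splunk's REST search-jobs export endpoint. The host, credentials, index, and query are placeholders, not EY or client specifics.

```python
# Hedged sketch: run an SPL search via Splunk's REST export endpoint.
# Host, credentials, index, and the SPL itself are placeholders.
import requests

SPLUNK_HOST = "https://splunk.example.com:8089"  # management port, hypothetical host

def export_search(spl: str, username: str, password: str) -> str:
    resp = requests.post(
        f"{SPLUNK_HOST}/services/search/jobs/export",
        auth=(username, password),
        data={
            "search": f"search {spl}",
            "output_mode": "json",
            "earliest_time": "-24h",
        },
        verify=False,  # lab-only shortcut; use proper TLS verification in production
        timeout=120,
    )
    resp.raise_for_status()
    return resp.text  # newline-delimited JSON events

# Example: CIM-style authentication failures aggregated by source and user.
events = export_search(
    "index=security sourcetype=auth action=failure | stats count by src, user",
    "admin", "changeme",
)
```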
Posted 1 day ago
3.0 - 10.0 years
0 Lacs
hyderabad, telangana, india
Remote
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

CMSTDR Senior (TechOps)

Key Capabilities:
• Experience working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA
• Minimum of Splunk Power User Certification
• Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc.
• Perform remote and on-site gap assessments of the SIEM solution.
• Define evaluation criteria and approach based on the client requirement and scope, factoring in industry best practices and regulations
• Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.)
• Evaluate the SIEM based on the defined criteria and prepare audit reports
• Good experience providing consulting to customers during the testing, evaluation, pilot, production, and training phases to ensure a successful deployment.
• Understand customer requirements and recommend best practices for SIEM solutions.
• Offer consultative advice on security principles and best practices related to SIEM operations
• Design and document a SIEM solution to meet the customer needs
• Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers
• Verification of data of log sources in the SIEM, following the Common Information Model (CIM)
• Experience in parsing and masking of data prior to ingestion in the SIEM
• Provide support for the data collection, processing, analysis, and operational reporting systems, including planning, installation, configuration, testing, troubleshooting, and problem resolution
• Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources
• Assist the client with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM
• Experience in handling big data integration via Splunk
• Expertise in SIEM content development, including developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems
• Hands-on experience in development and customization of Splunk Apps & Add-ons
• Builds advanced visualizations (interactive drilldowns, glass tables, etc.)
• Build and integrate contextual data into notable events
• Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks
• Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications.
• Experience in installation, configuration, and usage of premium Splunk Apps and Add-ons such as the ES App, UEBA, ITSI, etc.
• Sound knowledge of the configuration of alerts and reports.
• Good exposure to automatic lookups, data models, and creating complex SPL queries.
• Create, modify, and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements
• Work with the client SPOC for correlation rule tuning (as per the use case management life cycle), incident classification, and prioritization recommendations
• Experience in creating custom commands, custom alert actions, adaptive response actions, etc.

Qualification & experience:
• Minimum of 3 to 10 years' experience with a depth of network architecture knowledge that will translate over to deploying and integrating a complicated security intelligence solution into global enterprise environments.
• Strong oral, written, and listening skills are an essential component of effective consulting.
• Strong background in network administration. Ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary.
• Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security, and troubleshooting.
• Good to have: experience with designing and implementing Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance, and Security Management; multiple cluster deployment and management experience per vendor guidelines and industry best practices; the ability to troubleshoot Splunk platform and application issues, escalate issues, and work with Splunk support to resolve them.
• Certification in any one SIEM solution such as IBM QRadar, Exabeam, or Securonix will be an added advantage.
• Certifications in a core security-related discipline will be an added advantage.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
4.0 years
0 Lacs
bengaluru, karnataka, india
Remote
At Optiv, we're on a mission to help our clients make their businesses more secure. We're one of the fastest-growing companies in a truly essential industry. In your role at Optiv, you'll be inspired by a team of the brightest business and technical minds in cybersecurity. We are passionate champions for our clients and know from experience that the best solutions for our clients' needs come from working hard together. As part of our team, your voice matters, and you will do important work that has impact, on people, businesses, and nations. Our industry and our company move fast, and you can be sure that you will always have room to learn and grow. We're proud of our team and the important work we do to build confidence for a more connected world.

As a Threat Management Platform Developer, you'll play a key role in supporting and enhancing our cybersecurity reporting ecosystem by managing and customizing the PlexTrac platform. You'll work closely with security teams to streamline assessment workflows, develop actionable reporting templates, and drive platform integrations that align with internal processes. This role is ideal for someone with 1-4 years of experience in cybersecurity or platform development who is passionate about operational efficiency, automation, and offensive security tooling.

Who We Are Looking For
You'll serve as a platform technical enabler, responsible for managing, enhancing, and evolving the platform to support efficient cybersecurity reporting and threat exposure workflows. This includes:
• Optimizing platform usage by identifying repetitive reporting and workflow processes within security teams that can be centralized or automated using PlexTrac.
• Collaborating with penetration testers to translate technical assessment data into standardized, actionable reports and dashboards.
• Enhancing platform capabilities by developing and customizing templates, integrations, and automations using PlexTrac APIs and scripting frameworks.
• Conducting research to identify innovative features and integrations that support continuous threat exposure management.
• Supporting offensive security teams by integrating outputs from tools such as Burp Suite, Nessus, Nmap, Metasploit, and custom scripts into PlexTrac workflows.
• Building parsers and automation scripts to ingest and normalize data from red team engagements, vulnerability scans, and threat simulations (see the sketch below).

How You'll Make an Impact
• 1-4 years' experience implementing and developing cybersecurity reporting or threat exposure management platforms.
• Proficiency in Python and relevant libraries (e.g., gingerit, Pandas, Requests).
• Experience with RESTful APIs, data parsing, and JSON/XML.
• Familiarity with security tools, scanners (like Nessus or Burp Suite), and threat intelligence feeds is a plus.
• Strong communication skills to work across technical and non-technical teams.
• A passion for improving security workflows and a curiosity for automation and tooling.
• Ability to create efficient, well-documented, and reusable scripts and tools.
• Strong problem-solving skills and the ability to translate requirements into scalable solutions.

What You Can Expect From Optiv
• A company committed to championing Diversity, Equality, and Inclusion through our Employee Resource Groups.
• Work/life balance
• Professional training resources
• Creative problem-solving and the ability to tackle unique, complex projects
• Volunteer opportunities. "Optiv Chips In" encourages employees to volunteer and engage with their teams and communities.
• The ability and technology necessary to productively work remotely/from home (where applicable)

EEO Statement
Optiv is an equal opportunity employer. All qualified applicants for employment will be considered without regard to race, color, religion, sex, gender identity or expression, sexual orientation, pregnancy, age 40 and over, marital status, genetic information, national origin, status as an individual with a disability, military or veteran status, or any other basis protected by federal, state, or local law.

Optiv respects your privacy. By providing your information through this page or applying for a job at Optiv, you acknowledge that Optiv will collect, use, and process your information, which may include personal information and sensitive personal information, in connection with Optiv's selection and recruitment activities. For additional details on how Optiv uses and protects your personal information in the application process, click here to view our Applicant Privacy Notice. If you sign up to receive notifications of job postings, you may unsubscribe at any time.
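As a concrete example of the parser-building duty above, here is a hedged sketch that converts Nmap's XML output into flat, report-ready finding records. The output field names are illustrative only, not a PlexTrac schema.

```python
# Hedged sketch: normalize Nmap XML scan output into flat finding records.
# The output dict's keys are illustrative, not any platform's real schema.
import xml.etree.ElementTree as ET

def parse_nmap_xml(path: str) -> list[dict]:
    findings = []
    root = ET.parse(path).getroot()  # <nmaprun> element
    for host in root.findall("host"):
        addr_el = host.find("address")
        addr = addr_el.get("addr") if addr_el is not None else "unknown"
        for port in host.findall(".//port"):
            state = port.find("state")
            if state is None or state.get("state") != "open":
                continue  # only report open ports
            service = port.find("service")
            findings.append({
                "asset": addr,
                "port": int(port.get("portid")),
                "protocol": port.get("protocol"),
                "service": service.get("name") if service is not None else "unknown",
                "title": f"Open port {port.get('portid')}/{port.get('protocol')} on {addr}",
            })
    return findings
```

Records in this shape could then be pushed to the reporting platform's import API or bulk-loaded for deduplication and triage.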
Posted 1 day ago
8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Cybersecurity at Providence is responsible for appropriately protecting all information relating to its caregivers and affiliates, as well as protecting its confidential business information (including information relating to its caregivers, affiliates, and patients).

What will you be responsible for?
• Lead the design and implementation of data ingestion from diverse sources and various mechanisms for integration and normalization of logs.
• Extension of pre-built UDMs and creation of custom parsers where required for log sources (see the sketch below).
• Integration of the SIEM with other security capabilities and tools such as SOAR, EDR, a threat intelligence platform, and ticketing systems.
• Write custom actions, scripts, and/or integrations to extend SIEM platform functionality.
• Monitor performance and take timely action to scale the SIEM deployment, especially in a very high-volume security environment.
• Testing and deployment of newly created and migrated assets such as rules, playbooks, alerts, dashboards, etc.
• Lead and oversee deployment, operation, and maintenance of the global EDR platform.

What would your work week look like?
• Design and implement solutions to handle alert fatigue encountered in SIEM correlation.
• Guide the building or maturing of cloud security programs and the implementation of tools and approaches used for improving cloud security.
• Debug and solve issues in ingestion, parsing, normalization of data, etc.
• Develop custom queries, detection rules, workbooks, and automation playbooks to improve threat detection and response efficiency.
• Collaborate with threat analysts and incident response teams to triage, investigate, and respond to security alerts and incidents.
• Provide technical guidance in security best practices, incident response procedures, and threat hunting using security tools.
• Coordinate with service delivery managers, management, engineering, maintenance, and operational support teams to ensure timely delivery.
• Create and maintain documentation for SIEM & EDR configurations, procedures, and playbooks.
• Provide support to other security teams with respect to the EDR platform.

Who are we looking for?
• Bachelor's degree in a related field, such as computer science, or an equivalent combination of education and experience.
• 8+ years' experience in leading projects and delivering technical solutions related to security.
• Experience architecting, developing, or maintaining SIEM and SOAR platforms and secure cloud solutions.
• Strong understanding of SIEM & EDR solutions such as Splunk, CrowdStrike, LogRhythm, and Sentinel.
• Good understanding of log collection methodologies and aggregation techniques such as syslog-ng, syslog, NXLog, and Windows Event Forwarding.
• Good understanding of the MITRE ATT&CK framework, kill chains, and other attack models.
• Proficiency in scripting languages (e.g., Python, PowerShell) for automation purposes.
• Strong verbal and written communication skills and the ability to develop high-quality documentation.
• Relevant certifications (e.g., CISSP, CCNP Security) are a plus.
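To illustrate the custom-parser duty above, here is a small sketch that normalizes raw syslog lines into a UDM-style event dict before SIEM ingestion. The field names are hypothetical stand-ins, not an actual Chronicle or Sentinel schema.

```python
# Illustrative custom parser: raw syslog line -> normalized, UDM-style event.
# Output field names are hypothetical, not a vendor schema.
import re
from datetime import datetime, timezone

SYSLOG_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s"      # e.g. "Sep 14 10:12:01"
    r"(?P<host>\S+)\s"                        # originating host
    r"(?P<proc>[\w\-/]+)(?:\[(?P<pid>\d+)\])?:\s"  # process[pid]:
    r"(?P<msg>.*)$"
)

def normalize(line: str) -> dict | None:
    m = SYSLOG_RE.match(line)
    if not m:
        return None  # route to a dead-letter index so the parser can be improved
    return {
        "event_timestamp": m["ts"],
        "principal_hostname": m["host"],
        "process_name": m["proc"],
        "process_pid": int(m["pid"]) if m["pid"] else None,
        "raw_message": m["msg"],
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

print(normalize("Sep 14 10:12:01 web01 sshd[4321]: Failed password for root from 10.0.0.5"))
```

Keeping unmatched lines in a dead-letter path, rather than dropping them, is what lets parser coverage improve over time in high-volume environments.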
Posted 1 day ago
3.0 years
5 - 8 Lacs
thiruvananthapuram
On-site
3 - 5 Years | 2 Openings | Trivandrum

Role description
Sr. Splunk SME/Enterprise Monitoring Engineer

• 3+ years of hands-on experience with Splunk Enterprise as an admin, architect, or engineer.
• Experience designing and managing large-scale, multi-site Splunk deployments.
• Strong skills in SPL (Search Processing Language), dashboard design, and alerting strategies.
• Familiarity with Linux systems, scripting (e.g., Bash, Python), and APIs.
• Experience with enterprise monitoring tools and integration with Splunk (e.g., AppDynamics, Dynatrace, Nagios, Zabbix, etc.).
• Understanding of logging, metrics, and tracing in modern environments (on-prem and cloud).
• Strong understanding of network protocols, system logs, and application telemetry.
• Serve as the SME for Splunk architecture, deployment, and configuration across the enterprise.
• Maintain and optimize Splunk infrastructure, including indexers, forwarders, search heads, and clusters.
• Develop and manage custom dashboards, alerts, saved searches, and visualizations.
• Implement and tune log ingestion pipelines using Splunk Universal Forwarders, the HTTP Event Collector, and other data inputs (see the sketch below).
• Ensure high availability, scalability, and performance of the Splunk environment.
• Creating dashboards, reports, alerts, advanced Splunk searches, visualizations, log parsing, and external table lookups.
• Expertise with SPL (Search Processing Language) and understanding of Splunk architecture, including configuration files.
• Wide experience in monitoring and troubleshooting applications using tools like AppDynamics, Splunk, Grafana, Argos, OTEL, etc. to build observability for large-scale microservice deployments.
• Creating dashboards for various applications to monitor health and network issues, and configuring alerts.
• Excellent problem-solving, triaging, and debugging skills in large-scale distributed systems.
• Establishing and documenting runbooks and guidelines for using the multi-cloud infrastructure and microservices platform.
• Experience in optimizing search queries using summary indexing.
• Solid knowledge and experience in monitoring the Splunk infrastructure.
• Develop a long-term strategy and roadmap for AI/ML tooling to support the AI capabilities across the Splunk portfolio.
• Diagnose and resolve network-related issues affecting CI/CD pipelines, debug DNS, firewall, proxy, and SSL/TLS problems, and use tools like tcpdump, curl, and netstat for proactive maintenance.

Enterprise Monitoring & Observability
• Design and implement holistic enterprise monitoring solutions integrating Splunk with tools like AppDynamics, Dynatrace, Prometheus, Grafana, SolarWinds, or others.
• Collaborate with application, infrastructure, and security teams to define monitoring KPIs, SLAs, and thresholds.
• Build end-to-end visibility into application performance, system health, and user experience.
• Integrate Splunk with ITSM platforms (e.g., ServiceNow) for event and incident management automation.

Operations, Troubleshooting & Optimization
• Perform data onboarding, parsing, and field extraction for structured and unstructured data sources.
• Support incident response and root cause analysis using Splunk for troubleshooting and forensics.
• Regularly audit and optimize search performance, data retention policies, and index lifecycle management.
• Create runbooks, documentation, and SOPs for Splunk and monitoring tool usage.

Skills: Splunk, DevOps tools, Bash

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
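For the HTTP Event Collector ingestion path named above, here is a small hedged sketch of sending a structured event to HEC; the endpoint host, token, index, and event payload are placeholders.

```python
# Hedged sketch: send a structured event to Splunk's HTTP Event Collector.
# Endpoint host, token, index, and the event payload are placeholders.
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def send_event(event: dict, index: str = "app_logs", sourcetype: str = "_json") -> None:
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json={"event": event, "index": index, "sourcetype": sourcetype},
        timeout=10,
    )
    resp.raise_for_status()

send_event({"service": "checkout", "level": "ERROR", "msg": "payment gateway timeout"})
```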
Posted 2 days ago
0 years
0 Lacs
india
On-site
Role purpose
Are you passionate about cutting-edge technology and big data? At Prevalent AI, we're at the forefront of building innovative solutions to handle petabyte-scale data. As a Data Engineer, you will play a pivotal role in supporting the development of key data engineering components for our Data Fabric and Exposure Management products. In this exciting role, you'll work with open-source big data technologies to help us collect, transport, transform, and ingest massive amounts of data in a distributed architecture. If you thrive in an environment that encourages continuous learning, embraces modern technology, and values the power of data, this is the perfect opportunity for you.

Key accountabilities
The ideal candidate is a self-motivated individual with strong technology skills, a commitment to quality, and a positive work ethic, who can:
• Design, develop, and deploy data management modules, including data ingestion, parsing, scheduling, and processing, using agile practices (see the sketch below)
• Conduct unit, system, and integration testing of developed modules, ensuring high-quality data solutions
• Select and integrate Big Data tools and frameworks to meet capability requirements
• Collaborate with client and partner teams to deliver effective data solutions and resolve operational issues promptly
• Stay updated on industry trends and best practices in Data Engineering and open-source big data technologies
• Contribute to agile teamwork and follow a personal education plan for the technology stack and solution architecture
• Build processes for data transformation, structures, metadata, and workload management

Skills and Experience
• Relevant experience in data engineering with open-source big data tools (Hadoop, Spark, Kafka) and NoSQL databases (Postgres, MongoDB, Elastic)
• Skilled in data pipeline management (e.g., Airflow), AWS services (EC2, EMR, RDS), and stream processing (e.g., Spark Streaming)
• Proficient in object-oriented and functional programming languages (Python, Java, Scala) and data lake concepts (ingestion, transformation)
• Strong understanding of the Hadoop and Spark frameworks, Agile methodology, and scalable big data architectures
• Excellent communication, with a self-motivated and analytical mindset for fast-paced environments
• Experience working with cross-functional teams to optimize big data pipelines and solutions

Education
Master's/Bachelor's in Computer Science Engineering.
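As a rough sketch of the ingestion-and-scheduling pattern above, here is a minimal Airflow 2.x DAG wiring a collect step to a parse step. The DAG id, schedule, and task bodies are placeholders for real collectors and parsers.

```python
# Minimal Airflow 2.x DAG sketch: hourly collect -> parse pipeline.
# DAG id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def collect():
    """Pull raw data from a source system (placeholder)."""

def parse():
    """Parse and validate the raw records (placeholder)."""

with DAG(
    dag_id="exposure_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    t_collect = PythonOperator(task_id="collect", python_callable=collect)
    t_parse = PythonOperator(task_id="parse", python_callable=parse)
    t_collect >> t_parse  # parse runs only after collect succeeds
```

Retry logic and alerting, also named in the accountabilities, would be added via the operators' `retries` and `on_failure_callback` settings.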
Posted 2 days ago
0 years
1 - 2 Lacs
kollam
Remote
We are seeking a highly skilled and detail-oriented Database Administrator (DBA) to join our organization. The ideal candidate will bring deep technical expertise in SQL Server, data governance, and real-time data processing, along with strong proficiency in Microsoft Excel, Power Query, and CRM systems. You will be responsible for maintaining the performance, integrity, and security of our databases while supporting advanced data integration and transformation processes. This is a hands-on role that combines database administration, system optimization, CRM management, and cross-functional collaboration to ensure data accuracy and compliance across all environments.

Key Responsibilities:
• Perform CRM record maintenance and updates, ensuring data integrity across customer platforms.
• Organize, validate, and analyze processing reports to support business intelligence and operational decisions.
• Transfer data across multiple environments, ensuring consistency and reliability.
• Configure internal data systems based on evolving business and technical requirements.
• Conduct regular audits to enforce data governance standards and ensure compliance with internal and external policies.
• Perform advanced database administration, including indexing, query optimization, backup/recovery, and managing high-availability solutions.
• Design, restructure, and maintain databases tailored to dynamic business requirements.
• Implement data retention and management practices aligned with GDPR and data protection regulations.
• Support real-time and batch data processing systems for analytics and operational workflows.
• Research, document, and share technical insights and best practices on an ad-hoc basis.
• Collaborate closely with development and operations teams to improve database architecture and system efficiency.
• Deliver comprehensive and creative briefs to external agencies or government bodies, providing insightful data-driven recommendations.
• Communicate effectively with technical and non-technical stakeholders, translating business needs into actionable database solutions.

Technical & Functional Requirements:
• Strong experience with SQL Server, including performance tuning, troubleshooting, and optimization.
• Advanced Excel skills (formulas, pivot tables, macros, Power Query).
• Proficient in CRM database management and data maintenance best practices.
• Hands-on experience with XML for data structuring, parsing, and transformation (see the sketch below).
• Knowledge of data security, data governance, GDPR compliance, and retention policies.
• Ability to manage high-volume, real-time data environments effectively.
• Familiarity with data integration tools and platforms such as SSIS and AWS Glue, and scripting languages like Python.
• Experience implementing and managing database backup, recovery, and disaster recovery protocols.

Contact the employer: 7558929559
Job Types: Full-time, Permanent
Pay: ₹13,000.00 - ₹18,000.00 per month
Benefits: Work from home
Work Location: Remote
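To illustrate the XML-parsing and SQL Server duties together, here is a hedged sketch that reads a customer XML feed and loads it into a staging table via pyodbc. The connection string, table, and XML layout are hypothetical.

```python
# Hedged sketch: parse a customer XML feed and stage it in SQL Server.
# DSN, table name, and the XML element/attribute layout are hypothetical.
import xml.etree.ElementTree as ET
import pyodbc

def load_customers(xml_path: str) -> None:
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=db.example.local;DATABASE=crm;UID=etl_user;PWD=***"
    )
    cur = conn.cursor()
    # Assumed feed shape: <customers><customer id="..."><email>...</email>...</customer>...
    for rec in ET.parse(xml_path).getroot().findall("customer"):
        cur.execute(
            "INSERT INTO dbo.CustomerStaging (Id, Email, UpdatedAt) VALUES (?, ?, ?)",
            rec.get("id"), rec.findtext("email"), rec.findtext("updated_at"),
        )
    conn.commit()
    conn.close()
```

Loading into a staging table first, then merging into the live CRM tables, keeps the audit and validation steps described above separate from ingestion.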
Posted 2 days ago
4.0 years
0 Lacs
chennai
On-site
Chennai, IN | Full-Time

About Reveleer
Reveleer is a healthcare data and analytics company that uses Artificial Intelligence to give health plans across all business lines greater control over their Quality Improvement, Risk Adjustment, and Member Management programs. With one transformative solution, the Reveleer platform enables plans to independently execute and manage every aspect of enrollment, provider outreach and data retrieval, coding, abstraction, reporting, and submissions. Leveraging proprietary technology, robust data sets, and subject matter expertise, Reveleer provides complete record retrieval and review services so health plans can confidently plan and execute risk, quality, and member management programs to deliver more value and improved outcomes.

About the Role
As a BI Developer, you will collaborate with Initial Validation Audit (IVA), Risk Adjustment, and Quality Improvement (HEDIS) stakeholders to develop dashboards and reports that drive data-informed decision-making. These dashboards and reports will be used both internally and externally to support analytics initiatives. Additionally, you will build internal-facing reports for teams across the organization, including growth, marketing, business development, finance, and operations. Your role will involve managing analytics projects from discovery to implementation, ensuring that business users have access to accurate, timely, and actionable insights. You will work closely with data architects, developers, and analysts to develop analytics products, using Looker as the primary BI tool while also utilizing SQL and Excel for additional data manipulation and analysis (see the sketch below). You will also help automate report distribution through scheduled reports, emails, and dashboards, streamlining data delivery to stakeholders.

What You'll Do
• Dashboard Development & Data Visualization – Apply SDLC methodology to design, test, and deploy enterprise dashboards with a strong focus on user experience and effective data visualization.
• BI Reporting & Analysis – Generate ad-hoc and routine reports, transform raw data into meaningful insights, and provide actionable business intelligence to support data-driven decision-making.
• Stakeholder Communication & Training – Effectively communicate technical concepts to both clinical and technical audiences and collaborate with business users to understand and address reporting needs.
• Project & Priority Management – Manage multiple assignments, deadlines, and priorities independently, ensuring timely delivery while maintaining high standards of customer service and documentation.
• Develop metrics and dashboards with stakeholders to track key performance indicators (KPIs) related to risk score trending, quality improvement, suspecting, member and provider management, financial forecasting, and ROI.
• Collaborate with data engineering to support and validate data architecture.
• Provide user assistance and direction for ad-hoc data reporting.
• Build and refine new analytic processes to better support all lines of business.
• Maintain structured documentation for dashboards, reports, and data processes using Confluence, Jira, and other documentation tools.
• Track and manage analytics requests and tasks using Jira, ensuring timely completion and clear stakeholder communication.
• Write technical documentation to ensure knowledge sharing and reproducibility of BI processes.

Must Have
• 4+ years of experience in BI development with strong SQL skills (minimum 4 years) for query optimization on large data sets.
• Proficiency in data visualization tools (e.g., Looker).
• Experience in data parsing, manipulation, and validation.
• Strong statistical analysis skills for working with large data sets.
• Experience with ETL processing and knowledge of Data Warehouse solutions, strategies, and implementations.
• Ability to track and manage BI development tasks in Jira.
• Strong Excel skills (e.g., pivot tables, VLOOKUPs, visualization).
• Ability to build trust and communicate insights effectively across technical and business teams.

Nice to Have
• Proficiency in Python for data analysis and automation.
• Experience with U.S. Government-sponsored health plans (Medicare, Medicaid, ACA) and Managed Care Organizations (MCOs), particularly in Quality, Compliance, and Risk Adjustment reporting and analytics.
• Expertise in data architecture, including designing and implementing new database tables and optimizing schema structures for improved performance.
• Hands-on experience managing analytics projects using Jira, including creating custom dashboards, workflows, and tracking project progress efficiently.
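As a sketch of the SQL-plus-Excel workflow described above, here is a minimal example that pulls a KPI aggregate with SQL and hands it to analysts as a spreadsheet. The warehouse DSN, table, and column names are hypothetical, not Reveleer's schema.

```python
# Illustrative KPI pull: SQL aggregate -> pandas -> Excel for stakeholders.
# Connection string, table, and columns are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://readonly@warehouse.example.com/analytics")

KPI_SQL = """
SELECT plan_id,
       measurement_year,
       AVG(risk_score)           AS avg_risk_score,
       COUNT(DISTINCT member_id) AS members
FROM   risk_scores
GROUP  BY plan_id, measurement_year
"""

df = pd.read_sql(KPI_SQL, engine)
df.to_excel("risk_score_kpis.xlsx", index=False)  # distribute on a schedule
```

The same query would typically back a Looker explore, with the Excel export reserved for ad-hoc stakeholder requests.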
Posted 2 days ago
1.5 years
19 - 39 Lacs
noida
On-site
Job Summary: RACE Consulting is hiring a Backend Engineer for its client. The role involves building data ingestion workflows and integrations for multiple external tools and technologies to enable out-of-the-box data collection, observability, security, and ML insights. You will be responsible for developing solutions that expand product capabilities and integrations with global security and cloud providers.

Responsibilities:
• Develop end-to-end API integrations using Python (see the sketch below).
• Create technical documentation and end-user guides.
• Develop proprietary scripts for parsing events and logging.
• Create and maintain unit tests for developed artifacts.
• Ensure quality, relevance, and timely updates of the integration portfolio.
• Comply with coding standards, directives, and legal requirements.
• Collaborate with internal teams and external stakeholders (partners, suppliers, etc.).
• Detect and solve complex issues in integrations.
• Work with platform teams to enhance and build next-gen tools.

Requirements:
• Bachelor's degree in Computer Science or related fields (Engineering, Networking, Mathematics).
• 1.5+ years of experience coding in Python.
• 1+ years of experience with Linux, Docker, Kubernetes, CI/CD.
• Experience using web API development and testing tools (e.g., Postman).
• Proactive, problem-solving mindset with curiosity to innovate.
• Strong communication skills to work with teams/customers globally.

Desired Skills:
• Knowledge of programming patterns & test-driven development.
• Knowledge of web API protocols.
• Experience with Python unit testing.
• Advanced Python (multiprocessing, multithreading).
• Hands-on with Git and CI/CD pipelines.

Job Type: Full-time
Pay: ₹1,950,000.00 - ₹3,900,000.00 per year
Benefits: Flexible schedule, Health insurance, Paid time off, Provident Fund
Work Location: In person
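Here is a minimal sketch pairing the API-integration and unit-test requirements above: a tiny event-fetching client plus a mocked test. The vendor endpoint and response shape are hypothetical.

```python
# Hedged sketch: a small API-integration client with a mocked unit test.
# The vendor base URL and response schema are hypothetical.
import unittest
from unittest.mock import MagicMock, patch

import requests

class EventSource:
    BASE = "https://api.example-vendor.com/v1"

    def __init__(self, token: str):
        self.headers = {"Authorization": f"Bearer {token}"}

    def fetch_events(self, since: str) -> list[dict]:
        resp = requests.get(f"{self.BASE}/events", headers=self.headers,
                            params={"since": since}, timeout=30)
        resp.raise_for_status()
        return resp.json()["events"]

class EventSourceTest(unittest.TestCase):
    @patch("requests.get")
    def test_fetch_events_parses_payload(self, mock_get):
        # Stub the HTTP layer so the test never touches the network.
        mock_get.return_value = MagicMock(
            status_code=200, json=lambda: {"events": [{"id": 1}]}
        )
        self.assertEqual(EventSource("t").fetch_events("2024-01-01"), [{"id": 1}])

if __name__ == "__main__":
    unittest.main()
```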
Posted 2 days ago
3.0 years
0 Lacs
trivandrum, kerala, india
On-site
Role Description
Sr. Splunk SME/Enterprise Monitoring Engineer

3+ years of hands-on experience with Splunk Enterprise as an admin, architect, or engineer. Experience designing and managing large-scale, multi-site Splunk deployments. Strong skills in SPL (Search Processing Language), dashboard design, and alerting strategies. Familiarity with Linux systems, scripting (e.g., Bash, Python), and APIs. Experience with enterprise monitoring tools and integration with Splunk (e.g., AppDynamics, Dynatrace, Nagios, Zabbix, etc.). Understanding of logging, metrics, and tracing in modern environments (on-prem and cloud). Strong understanding of network protocols, system logs, and application telemetry.

Serve as the SME for Splunk architecture, deployment, and configuration across the enterprise. Maintain and optimize Splunk infrastructure, including indexers, forwarders, search heads, and clusters. Develop and manage custom dashboards, alerts, saved searches, and visualizations. Implement and tune log ingestion pipelines using Splunk Universal Forwarders, the HTTP Event Collector, and other data inputs. Ensure high availability, scalability, and performance of the Splunk environment. Creating dashboards, reports, alerts, advanced Splunk searches, visualizations, log parsing, and external table lookups. Expertise with SPL (Search Processing Language) and understanding of Splunk architecture, including configuration files. Wide experience in monitoring and troubleshooting applications using tools like AppDynamics, Splunk, Grafana, Argos, OTEL, etc. to build observability for large-scale microservice deployments. Creating dashboards for various applications to monitor health and network issues, and configuring alerts. Excellent problem-solving, triaging, and debugging skills in large-scale distributed systems. Establishing and documenting runbooks and guidelines for using the multi-cloud infrastructure and microservices platform. Experience in optimizing search queries using summary indexing. Solid knowledge and experience in monitoring the Splunk infrastructure. Develop a long-term strategy and roadmap for AI/ML tooling to support the AI capabilities across the Splunk portfolio. Diagnose and resolve network-related issues affecting CI/CD pipelines, debug DNS, firewall, proxy, and SSL/TLS problems, and use tools like tcpdump, curl, and netstat for proactive maintenance.

Enterprise Monitoring & Observability
Design and implement holistic enterprise monitoring solutions integrating Splunk with tools like AppDynamics, Dynatrace, Prometheus, Grafana, SolarWinds, or others. Collaborate with application, infrastructure, and security teams to define monitoring KPIs, SLAs, and thresholds. Build end-to-end visibility into application performance, system health, and user experience. Integrate Splunk with ITSM platforms (e.g., ServiceNow) for event and incident management automation.

Operations, Troubleshooting & Optimization
Perform data onboarding, parsing, and field extraction for structured and unstructured data sources. Support incident response and root cause analysis using Splunk for troubleshooting and forensics. Regularly audit and optimize search performance, data retention policies, and index lifecycle management. Create runbooks, documentation, and SOPs for Splunk and monitoring tool usage.

Skills: Splunk, DevOps tools, Bash
Posted 2 days ago
4.0 years
0 Lacs
chennai, tamil nadu, india
On-site
About Reveleer
Reveleer is a healthcare data and analytics company that uses Artificial Intelligence to give health plans across all business lines greater control over their Quality Improvement, Risk Adjustment, and Member Management programs. With one transformative solution, the Reveleer platform enables plans to independently execute and manage every aspect of enrollment, provider outreach and data retrieval, coding, abstraction, reporting, and submissions. Leveraging proprietary technology, robust data sets, and subject matter expertise, Reveleer provides complete record retrieval and review services so health plans can confidently plan and execute risk, quality, and member management programs to deliver more value and improved outcomes.

About The Role
As a BI Developer, you will collaborate with Initial Validation Audit (IVA), Risk Adjustment, and Quality Improvement (HEDIS) stakeholders to develop dashboards and reports that drive data-informed decision-making. These dashboards and reports will be used both internally and externally to support analytics initiatives. Additionally, you will build internal-facing reports for teams across the organization, including growth, marketing, business development, finance, and operations. Your role will involve managing analytics projects from discovery to implementation, ensuring that business users have access to accurate, timely, and actionable insights. You will work closely with data architects, developers, and analysts to develop analytics products, using Looker as the primary BI tool while also utilizing SQL and Excel for additional data manipulation and analysis. You will also help automate report distribution through scheduled reports, emails, and dashboards, streamlining data delivery to stakeholders.

What You'll Do
• Dashboard Development & Data Visualization – Apply SDLC methodology to design, test, and deploy enterprise dashboards with a strong focus on user experience and effective data visualization.
• BI Reporting & Analysis – Generate ad-hoc and routine reports, transform raw data into meaningful insights, and provide actionable business intelligence to support data-driven decision-making.
• Stakeholder Communication & Training – Effectively communicate technical concepts to both clinical and technical audiences and collaborate with business users to understand and address reporting needs.
• Project & Priority Management – Manage multiple assignments, deadlines, and priorities independently, ensuring timely delivery while maintaining high standards of customer service and documentation.
• Develop metrics and dashboards with stakeholders to track key performance indicators (KPIs) related to risk score trending, quality improvement, suspecting, member and provider management, financial forecasting, and ROI.
• Collaborate with data engineering to support and validate data architecture.
• Provide user assistance and direction for ad-hoc data reporting.
• Build and refine new analytic processes to better support all lines of business.
• Maintain structured documentation for dashboards, reports, and data processes using Confluence, Jira, and other documentation tools.
• Track and manage analytics requests and tasks using Jira, ensuring timely completion and clear stakeholder communication.
• Write technical documentation to ensure knowledge sharing and reproducibility of BI processes.

Must Have
• 4+ years of experience in BI development with strong SQL skills (minimum 4 years) for query optimization on large data sets.
• Proficiency in data visualization tools (e.g., Looker).
• Experience in data parsing, manipulation, and validation.
• Strong statistical analysis skills for working with large data sets.
• Experience with ETL processing and knowledge of Data Warehouse solutions, strategies, and implementations.
• Ability to track and manage BI development tasks in Jira.
• Strong Excel skills (e.g., pivot tables, VLOOKUPs, visualization).
• Ability to build trust and communicate insights effectively across technical and business teams.

Nice to Have
• Proficiency in Python for data analysis and automation.
• Experience with U.S. Government-sponsored health plans (Medicare, Medicaid, ACA) and Managed Care Organizations (MCOs), particularly in Quality, Compliance, and Risk Adjustment reporting and analytics.
• Expertise in data architecture, including designing and implementing new database tables and optimizing schema structures for improved performance.
• Hands-on experience managing analytics projects using Jira, including creating custom dashboards, workflows, and tracking project progress efficiently.
Posted 2 days ago
0 years
0 Lacs
india
Remote
Location: Remote (applications open worldwide)
Compensation: $20,000 - $40,000 / year (based on experience and scope of ownership)
Skills: Semantic Search, Vector Databases, Prompt Engineering, GenAI Frameworks, React Agents, Graph Agents, Document Parsing, Python, Scalable APIs

About AnswerThis
AnswerThis is an AI-powered research platform built to eliminate the most time-consuming part of academic research: the literature review. We serve researchers globally, from PhD students to academic professionals, helping them go from query to a comprehensive, cited review in minutes. Our system understands how papers connect, ranks relevance, summarizes insights, and maps citations. This removes weeks of manual work. But that is just the beginning. We are building a vertically integrated AI research suite that streamlines every stage of academic discovery. If you want to design algorithms that directly accelerate science in medicine, AI, climate, and more, this is the place to do it.

The Role
We are hiring an AI Engineer to lead the development of the core intelligence behind AnswerThis. This role requires deep expertise in information retrieval, scalable backend systems, and generative AI. You will:
• Build and optimize semantic search pipelines, graph-based retrieval, and agentic systems (see the sketch below).
• Design and deploy scalable APIs and services capable of serving hundreds of concurrent requests.
• Architect, implement, and productionize vector databases and embedding-based retrieval systems.
• Work on GenAI integrations including react agents, graph agents, prompt engineering, and LLM orchestration.
• Set up full-stack search engines from scratch that can handle real-world academic workloads.
• Contribute directly to the infrastructure that makes cutting-edge science more efficient.
This is a hands-on role with real ownership. You will move fast, experiment, and ship production systems that impact thousands of researchers worldwide.

Requirements
• Strong background in semantic search, vector databases, LLM frameworks, agent-based systems, and document retrieval.
• Skilled in Python for backend development, with the ability to build scalable APIs and services.
• Past experience deploying systems at scale, including search engines, retrieval pipelines, or embedding models.
• Ability to design infrastructure that serves hundreds of concurrent requests.
• Experience setting up and hosting APIs with public access while balancing speed, cost, and reliability.
• A research-driven and pragmatic mindset with a focus on measurable outcomes.

Compensation
$20,000 - $40,000 / year depending on experience, expertise, and scope of ownership. This is a full-time role with significant responsibility and long-term upside.

How to Apply
Send the following to careers@answerthis.io with the subject "AI Engineer Application":
• Your resume.
• Details of past projects, especially production-level work with search, retrieval, or GenAI pipelines.
• A short note on why you want to design algorithms at AnswerThis.
Applications without these will not be considered. Let us build the future of research together.
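For a bare-bones view of the semantic-search core named above: rank documents by cosine similarity of embeddings. The embed() function here is a deterministic stub standing in for whatever embedding model or API the real pipeline uses; the vector dimension is arbitrary.

```python
# Bare-bones semantic-search sketch: cosine-similarity ranking over embeddings.
# embed() is a stub; a real system would call an embedding model or API.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: one vector per text (deterministic random stub)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))  # 384 dims is an arbitrary choice

def top_k(query: str, docs: list[str], k: int = 3) -> list[tuple[float, str]]:
    doc_vecs = embed(docs)
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:k]
    return [(float(sims[i]), docs[i]) for i in order]
```

In production, the doc vectors would be precomputed and stored in a vector database, with this ranking done by its approximate-nearest-neighbor index rather than a dense matrix product.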
Posted 2 days ago
5.0 years
0 Lacs
trivandrum, kerala, india
On-site
Role Description
Sr. Splunk SME / Enterprise Monitoring Engineer
5+ years of experience in IT infrastructure, DevOps, or monitoring roles.
3+ years of hands-on experience with Splunk Enterprise as an admin, architect, or engineer.
Experience designing and managing large-scale, multi-site Splunk deployments.
Strong skills in SPL (Search Processing Language), dashboard design, and alerting strategies.
Familiarity with Linux systems, scripting (e.g., Bash, Python), and APIs.
Experience with enterprise monitoring tools and integration with Splunk (e.g., AppDynamics, Dynatrace, Nagios, Zabbix).
Understanding of logging, metrics, and tracing in modern environments (on-prem and cloud).
Strong understanding of network protocols, system logs, and application telemetry.
Serve as the SME for Splunk architecture, deployment, and configuration across the enterprise.
Maintain and optimize Splunk infrastructure, including indexers, forwarders, search heads, and clusters.
Develop and manage custom dashboards, alerts, saved searches, and visualizations.
Implement and tune log ingestion pipelines using Splunk Universal Forwarders, HTTP Event Collector, and other data inputs.
Ensure high availability, scalability, and performance of the Splunk environment.
Create dashboards, reports, alerts, advanced Splunk searches, visualizations, log parsing, and external table lookups.
Expertise with SPL (Search Processing Language) and understanding of Splunk architecture, including configuration files.
Wide experience in monitoring and troubleshooting applications using tools like AppDynamics, Splunk, Grafana, Argos, OTEL, etc. to build observability for large-scale microservice deployments.
Create dashboards for various applications to monitor health and network issues, and configure alerts.
Excellent problem-solving, triaging, and debugging skills in large-scale distributed systems.
Establish and document runbooks and guidelines for using the multi-cloud infrastructure and microservices platform.
Experience in optimizing search queries using summary indexing.
Solid knowledge and experience in monitoring the Splunk infrastructure.
Develop a long-term strategy and roadmap for AI/ML tooling to support the AI capabilities across the Splunk portfolio.
Diagnose and resolve network-related issues affecting CI/CD pipelines; debug DNS, firewall, proxy, and SSL/TLS problems; and use tools like tcpdump, curl, and netstat for proactive maintenance.
Enterprise Monitoring & Observability
Design and implement holistic enterprise monitoring solutions integrating Splunk with tools like AppDynamics, Dynatrace, Prometheus, Grafana, SolarWinds, or others.
Collaborate with application, infrastructure, and security teams to define monitoring KPIs, SLAs, and thresholds.
Build end-to-end visibility into application performance, system health, and user experience.
Integrate Splunk with ITSM platforms (e.g., ServiceNow) for event and incident management automation.
Operations, Troubleshooting & Optimization
Perform data onboarding, parsing, and field extraction for structured and unstructured data sources.
Support incident response and root cause analysis using Splunk for troubleshooting and forensics.
Regularly audit and optimize search performance, data retention policies, and index lifecycle management.
Create runbooks, documentation, and SOPs for Splunk and monitoring tool usage.
Skills: Splunk, DevOps tools, Bash, DevOps
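The log parsing and field extraction this role involves can be sketched in plain Python. This is a hedged illustration of regex-based extraction from a syslog-like line; in an actual Splunk deployment the equivalent would be done with props.conf/transforms.conf or SPL's rex command, and the log format here is assumed:

```python
import re

# Minimal field-extraction sketch for a syslog-like line (assumed format).
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<host>\S+) (?P<app>\w+)\[(?P<pid>\d+)\]: "
    r"(?P<level>[A-Z]+) (?P<message>.*)"
)

line = "2025-09-14 10:32:01 web-01 nginx[1234]: ERROR upstream timed out"
match = LOG_PATTERN.match(line)
if match:
    event = match.groupdict()  # named groups become event fields
    print(event["host"], event["level"], event["message"])
```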
Posted 2 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Who Are We
Raptee.HV is a full-stack electric motorcycle startup with a very strong technical moat, founded in 2019 by four engineers from Chennai (ex-Tesla, Wipro), working on bringing a no-compromise upgrade motorcycle to an otherwise scooter-only EV market. Raptee is incubated at CIIC & ARAI.
Role Overview
We are seeking a highly motivated and talented Data Engineer Intern to play a pivotal role in establishing our data infrastructure. This is a unique greenfield opportunity to build our data practice from the ground up. The ideal candidate is a proactive problem-solver, passionate about transforming raw data into actionable insights, and excited by the challenge of working with complex datasets from IoT devices and user applications in the electric vehicle (EV) domain. You will be instrumental in creating the systems that turn data into intelligence.
What You’ll Do
ETL Pipeline Development: Design, build, and maintain foundational ETL (Extract, Transform, Load) processes to ingest and normalize data from diverse sources, including vehicle sensors, user applications, JSON/CSV files, and external APIs.
Data Analysis & Trend Discovery: Perform exploratory data analysis to uncover trends, patterns, and anomalies in large datasets. Your insights will directly influence product strategy and development.
Insight Visualization: Develop and manage interactive dashboards, reports, and data visualization applications to communicate key findings and performance metrics to both technical and non-technical stakeholders.
Data Infrastructure: Assist in setting up and managing scalable data storage solutions, ensuring data integrity, security, and accessibility for analysis.
Cross-Functional Collaboration: Work closely with engineering and product teams to understand data requirements and help integrate data-driven decision-making into all aspects of our operations.
Who Can Apply?
Strong proficiency in Python and its core data manipulation libraries (e.g., Pandas, NumPy).
Solid understanding of SQL for querying and managing relational databases.
Demonstrable experience in parsing and handling common data formats like JSON and CSV.
Excellent analytical and problem-solving skills with meticulous attention to detail.
Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Statistics, or a related quantitative field.
Preferred Qualifications (What Will Set You Apart)
Hands-on experience with at least one major cloud platform (e.g., AWS, Google Cloud Platform).
Familiarity with data visualization libraries (e.g., Matplotlib, Seaborn, Plotly) or business intelligence tools (e.g., Tableau, Power BI).
Basic understanding of machine learning concepts.
Previous experience working with REST APIs to retrieve data.
A genuine interest in the electric vehicle (EV) industry, IoT data, or sustainable technology.
What’s In It For You
Invaluable hands-on experience building a data ecosystem from scratch in a high-growth, cutting-edge industry.
The opportunity to apply your skills to real-world challenges by working with large-scale, complex datasets.
A chance to develop a versatile skill set across the entire data lifecycle, from engineering and pipeline development to advanced analysis and visualization.
The ability to make a direct and tangible impact on a product that is shaping the future of mobility.
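The ingest-and-normalize ETL step described above can be pictured with a short pandas sketch. All field names and sample values are hypothetical; the point is the shape of the pipeline — rename to a common schema, coerce timestamps, combine:

```python
import pandas as pd

# Minimal ETL sketch: normalize vehicle telemetry (JSON-like records)
# and app events (CSV-like) into one frame. Fields are hypothetical.
sensor_records = [
    {"ts": "2025-09-01T10:00:00", "batt": 87, "source": "vehicle"},
    {"ts": "2025-09-01T10:05:00", "batt": 86, "source": "vehicle"},
]
app_events = pd.DataFrame({
    "timestamp": ["2025-09-01T10:02:00"],
    "event": ["ride_started"],
})

sensor_df = pd.DataFrame(sensor_records).rename(
    columns={"ts": "timestamp", "batt": "battery_pct"}
)

# Normalize timestamp types before combining the two sources.
for df in (sensor_df, app_events):
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")

combined = pd.concat([sensor_df, app_events], ignore_index=True)
combined = combined.dropna(subset=["timestamp"]).sort_values("timestamp")
print(combined)
```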
Posted 2 days ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Title: Data Scientist
Experience: 5+ Years
Location: Hyderabad
Job Type: Full-Time (Work from Office)
Job Summary:
We are looking for a Data Scientist to play a critical role in building the next generation of machine learning applications and services for global sourcing and procurement operations. The candidate will join a team whose goal is to innovate and build data-driven solutions to support sourcing and procurement operations. We are one of the fastest growing teams across Amazon, with a strong technology orientation. As a Data Scientist, you will have the opportunity to play a key role in driving the development of key features for our customers. You will take on challenging problems and deliver solutions that either leverage existing academic and industrial research or draw on your own out-of-the-box pragmatic thinking.
Key job responsibilities:
Analyse and extract relevant information from large amounts of data to help automate and optimize key processes.
Research, develop, and implement novel machine learning, LLM, and statistical approaches to improve the user experience in the procurement domain.
Use machine learning and analytical techniques to create scalable solutions for business problems.
Skill Sets:
5+ years of experience building models for business applications
Experience programming in Python
Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
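As a toy illustration of "machine learning for business problems" in the procurement domain, here is a minimal scikit-learn text-classification sketch. The categories and training texts are invented; a real model would be trained on actual procurement data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: classify procurement requests by category (hypothetical).
texts = [
    "quote for 500 steel brackets",
    "renew software license subscription",
    "purchase order for office chairs",
    "cloud hosting invoice renewal",
]
labels = ["goods", "services", "goods", "services"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["annual license renewal for analytics tool"]))
```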
Posted 2 days ago
4.0 years
0 Lacs
kochi, kerala, india
On-site
Job Description
Designation: Senior Backend Data & Integration Engineer (Python/TypeScript, Azure, Integrations)
Experience: 4–8 Years
Location: Cochin
Job Summary
Build data pipelines (crawling/parsing, deduplication/delta, embeddings) and connect external systems and interfaces.
Key Responsibilities:
• Development of crawling/fetch pipelines (API-first; Playwright/requests only where permitted)
• Parsing/normalization of job postings & CVs, deduplication/delta logic (seen hash, repost heuristics)
• Embeddings/similarity search (controlling Azure OpenAI, vector persistence in pgvector)
• Integrations: HR4YOU (API/webhooks/CSV import), SerpAPI, BA job board, email/SMTP
• Batch/stream processing (Azure Functions/container jobs), retry/backoff, dead-letter queues
• Telemetry for data quality (freshness, duplicate rate, coverage, cost per 1,000 items)
• Collaboration with FE for exports (CSV/Excel, presigned URLs) and admin configuration
Must-Have Requirements:
• 4+ years of backend/data engineering experience
• Python (FastAPI, pydantic, httpx/requests, Playwright/Selenium), solid TypeScript for smaller services/SDKs
• Azure: Functions/Container Apps or AKS jobs, Storage/Blob, Key Vault, Monitor/Log Analytics
• Messaging: Service Bus/Queues, idempotence & exactly-once semantics, pragmatic approach
• Databases: PostgreSQL, pgvector, query design & performance tuning
• Clean ETL/ELT patterns, testability (pytest), observability (OpenTelemetry)
Nice-to-Have:
• NLP/IE experience (spaCy/regex/rapidfuzz), document parsing (pdfminer/textract)
• Experience with license/ToS-compliant data retrieval, captcha/anti-bot strategies (legally compliant)
• Working method: API-first, clean code, trunk-based development, mandatory code reviews
• Tools/stack: GitHub, GitHub Actions/Azure DevOps, Docker, pnpm/Turborepo (Monorepo), Jira/Linear, Notion/Confluence
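The "seen hash" deduplication/delta logic named above can be sketched as follows. This is a minimal, hedged illustration: it normalizes a posting's key fields and hashes them so reposts with cosmetic changes collapse to one key. The field choices and normalization rules are assumptions, not the employer's actual heuristics:

```python
import hashlib
import re

def seen_hash(title: str, company: str, description: str) -> str:
    """Stable content hash for deduplicating job postings (illustrative)."""
    def norm(s: str) -> str:
        # Lowercase, strip punctuation-ish noise, collapse whitespace.
        return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", s.lower())).strip()

    key = "|".join([norm(title), norm(company), norm(description)[:500]])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

seen: set[str] = set()

def is_new(posting: dict) -> bool:
    h = seen_hash(posting["title"], posting["company"], posting["description"])
    if h in seen:
        return False  # duplicate or repost
    seen.add(h)
    return True

print(is_new({"title": "Data Engineer", "company": "Acme", "description": "Build ETL"}))
print(is_new({"title": "Data  Engineer!", "company": "ACME", "description": "Build ETL"}))  # same hash -> False
```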
Posted 2 days ago
7.0 years
0 Lacs
mohali district, india
On-site
About the Role:
We are looking for a Senior Software Engineer to work on core systems powering health data pipelines, secure user messaging, image uploads, and AI-enabled features. You will contribute to building robust microservices, efficient data pipelines, and modular infrastructure to support our next-gen health platform.
Responsibilities:
Develop and maintain microservices for:
- Health data exchange and real-time access
- Chat/messaging and AI chatbot experiences
- Image upload, parsing, and visualization pipelines
Write clean, testable code in Kotlin with Spring Boot
Work with Redis for caching, OpenSearch for fast querying, and PostgreSQL for persistence
Follow established engineering practices around testing, deployments, and logging
Participate in architectural discussions, code reviews, and optimization efforts
Work cross-functionally with QA, product, DevOps, and design teams
Qualifications:
4–7 years of backend engineering experience
Proficiency in Kotlin (or Java) and Spring Boot
Knowledge of Redis, OpenSearch, PostgreSQL
Experience building microservices and RESTful APIs
Exposure to messaging platforms, chat engines, or AI conversation systems
Basic understanding of image processing or file handling in backend systems
Comfortable with CI/CD, containerized deployments, and observability tooling
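The Redis-for-caching responsibility follows the common cache-aside pattern, sketched below in Python with redis-py for consistency with this document's other examples (the role itself would implement it in Kotlin/Spring Boot). The key scheme, TTL, and a local Redis server are all assumptions:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)  # assumes local Redis

def fetch_profile_from_db(user_id: str) -> dict:
    # Stand-in for a real PostgreSQL query.
    return {"id": user_id, "name": "example"}

def get_profile(user_id: str) -> dict:
    """Cache-aside: try Redis first, fall back to the database on a miss."""
    key = f"profile:{user_id}"  # hypothetical key scheme
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = fetch_profile_from_db(user_id)
    r.setex(key, 300, json.dumps(profile))  # cache for 5 minutes (assumed TTL)
    return profile
```

Cache-aside keeps the database authoritative: a stale or evicted key simply costs one extra database read.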
Posted 2 days ago
4.0 years
0 Lacs
hyderābād
On-site
About this role:
Wells Fargo is seeking a Senior Software Engineer. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow.
In this role, you will:
Lead moderately complex initiatives and deliverables within technical domain environments
Contribute to large-scale planning of strategies
Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments
Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
Lead projects and act as an escalation point, providing guidance and direction to less experienced staff
Required Qualifications:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
Desired Qualifications:
B.Tech or equivalent degree
Minimum of 4+ years of experience on development projects
Hands-on experience with Harness for CI/CD management
Experience handling Python projects in an OCP environment
Proven track record of building automation solutions using Python and PowerShell
Exposure to AI/ML model integration or usage of LLMs in engineering tools or automation workflows
Experience with cloud services and infrastructure automation (GCP, OCP)
Familiarity with containerization (Docker, Kubernetes) and monitoring tools
Job Expectations:
Lead requirement gathering, understand the justification for ROI, and design, develop, and implement AI-driven automation solutions across infrastructure, operations, and engineering workflows
Evaluate and adopt appropriate AI/ML models or LLMs to automate decision-making or streamline manual engineering tasks
Guide the team in using Python and scripting (PowerShell/Bash) to build scalable automation pipelines
Build REST APIs, data parsing tools, and integration scripts using Python and third-party libraries
Strong development experience in JavaScript, TypeScript, React, and Flask; proven experience developing front-end pages and backends with Python
Experience using tools such as AppDynamics EUM, Grafana, SPLOC, BigPanda, Prometheus, and Dynatrace
Work on OpenTelemetry in coordination with the enterprise OTel team and vertical platform teams
Design, configure, and maintain robust CI/CD pipelines using GitHub and Harness
Ensure reliable deployment, rollback strategies, and environment configuration management
Define metrics and implement observability for the entire CI/CD pipeline
Create and manage infrastructure-as-code (IaC) solutions (using Terraform, PowerShell DSC)
Automate routine infrastructure tasks and integrations with cloud platforms
Work closely with Production Support, Development, and Infrastructure teams to understand automation needs
Translate complex technical needs into actionable development plans
Provide regular updates, demos, and documentation of solutions and automation tools
Stay current with the latest trends in AI, MLOps, automation tools, and cloud-native practices
Identify opportunities to reduce manual toil and improve deployment speed, accuracy, and repeatability
Posting End Date: 17 Sep 2025 (the posting may come down early due to the volume of applicants)
We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.
Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.
Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.
Applicants with Disabilities
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.
Drug and Alcohol Policy
Wells Fargo maintains a drug-free workplace. Please see our Drug and Alcohol Policy to learn more.
Wells Fargo Recruitment and Hiring Requirements:
a. Third-party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
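The "REST APIs, data parsing tools, and integration scripts" expectation can be pictured with a minimal Python sketch using requests with exponential backoff, a staple of resilient automation scripts. The endpoint URL and retry parameters are hypothetical:

```python
import time
import requests

def fetch_with_backoff(url: str, max_retries: int = 4) -> dict:
    """GET a JSON endpoint, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code in (429, 500, 502, 503, 504):
                raise requests.HTTPError(f"transient status {resp.status_code}")
            resp.raise_for_status()
            return resp.json()
        except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")

# Hypothetical endpoint; any JSON API would work here.
data = fetch_with_backoff("https://api.example.com/v1/deployments")
print(data)
```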
Posted 3 days ago
4.0 years
0 Lacs
kochi, kerala, india
On-site
Greetings from BEO Software Pvt Ltd.
We are a German-headquartered (BEO GmbH) IT solutions and services company based in Kochi. To improve our clients' operational efficiencies, we bring time-tested methodologies, proven processes, and deep expertise in software development, as well as a legacy of best practices. We provide full-stack development services and serve as an extended office for various clients across Europe.
Please find the details below:
Designation: Senior Backend Data & Integration Engineer (Python/TypeScript, Azure, Integrations)
Experience: 4+ Years
Job Location: Kochi, Kerala
Responsibilities:
• Development of crawling/fetch pipelines (API-first; Playwright/requests only where permitted).
• Parsing/normalization of job postings & CVs, deduplication/delta logic (seen hash, repost heuristics).
• Embeddings/similarity search (controlling Azure OpenAI, vector persistence in pgvector).
• Integrations: HR4YOU (API/webhooks/CSV import), SerpAPI, BA job board, email/SMTP.
• Batch/stream processing (Azure Functions/container jobs), retry/backoff, dead-letter queues.
• Telemetry for data quality (freshness, duplicate rate, coverage, cost per 1,000 items).
• Collaboration with FE for exports (CSV/Excel, presigned URLs) and admin configuration.
Requirements:
• 4+ years of backend/data engineering experience.
• Python (FastAPI, pydantic, httpx/requests, Playwright/Selenium), solid TypeScript for smaller services/SDKs.
• Azure: Functions/Container Apps or AKS jobs, Storage/Blob, Key Vault, Monitor/Log Analytics.
• Messaging: Service Bus/Queues, idempotence & exactly-once semantics, pragmatic approach.
• Databases: PostgreSQL, pgvector, query design & performance tuning.
• Clean ETL/ELT patterns, testability (pytest), observability (OpenTelemetry).
• NLP/IE experience (spaCy/regex/rapidfuzz), document parsing (pdfminer/textract).
• Experience with license/ToS-compliant data retrieval, captcha/anti-bot strategies (legally compliant).
• Working method: API-first, clean code, trunk-based development, mandatory code reviews.
• Tools/stack: GitHub, GitHub Actions/Azure DevOps, Docker, pnpm/Turborepo (Monorepo), Jira/Linear, Notion/Confluence.
• On-call/support: rotating, "you build it, you run it".
Why join us?
We offer numerous opportunities for career advancement and a remuneration package that rivals the best in the industry. We are a certified "Great Place to Work" and offer a welcoming work environment with excellent work-life balance.
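As a rough Python illustration of the retry/backoff and dead-letter-queue pattern this posting names: failed items are retried a bounded number of times and then parked for inspection. The in-memory deque and the processing function are stand-ins, not the actual Azure Service Bus integration:

```python
import time
from collections import deque

work_queue: deque = deque([{"id": 1, "url": "https://example.com/a"}])
dead_letter: list = []

MAX_ATTEMPTS = 3

def process(item: dict) -> None:
    # Stand-in for real work (fetch, parse, persist); may raise on failure.
    raise ConnectionError("simulated transient failure")

while work_queue:
    item = work_queue.popleft()
    attempts = item.get("attempts", 0)
    try:
        process(item)
    except Exception:
        if attempts + 1 >= MAX_ATTEMPTS:
            dead_letter.append(item)  # park for manual inspection
        else:
            time.sleep(2 ** attempts)  # exponential backoff: 1s, 2s, ...
            item["attempts"] = attempts + 1
            work_queue.append(item)   # requeue for another attempt
```

A real Service Bus queue handles the requeue and dead-lettering server-side; the bounded-retry-then-park logic is the same.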
Posted 3 days ago
The parsing job market in India is thriving, with a growing demand for professionals skilled in parsing techniques across various industries. Employers are actively seeking individuals who can effectively extract and analyze structured data from different sources. If you are a job seeker looking to explore parsing roles in India, this article will provide you with valuable insights and guidance.
India's major tech hubs are known for their vibrant tech industries and offer numerous opportunities for individuals interested in parsing roles.
The average salary range for parsing professionals in India varies based on experience levels. Entry-level positions can expect to earn between INR 3-6 lakhs per annum, while experienced professionals can command salaries ranging from INR 8-15 lakhs per annum.
In the field of parsing, a typical career path may include the following progression:
- Junior Developer
- Software Engineer
- Senior Developer
- Tech Lead
- Architect
As professionals gain experience and expertise in parsing techniques, they can advance to higher roles with increased responsibilities.
In addition to parsing skills, individuals pursuing roles in this field are often expected to possess or develop the following skills:
- Data analysis
- Programming languages (e.g., Python, Java)
- Knowledge of databases
- Problem-solving abilities
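To make the parsing skill concrete, here is a minimal Python example that parses a JSON payload and extracts structured fields; the payload shape is invented for illustration:

```python
import json

# Hypothetical payload, e.g., from a job-board API.
payload = '{"job": {"title": "Parser Developer", "skills": ["Python", "SQL"]}}'

data = json.loads(payload)           # parse JSON text into Python objects
title = data["job"]["title"]
skills = ", ".join(data["job"]["skills"])
print(f"{title}: {skills}")
```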
As you prepare for parsing roles in India, remember to showcase your expertise in parsing techniques and related skills during interviews. Stay updated with the latest trends in the field and practice answering common interview questions to boost your confidence. With dedication and perseverance, you can secure a rewarding career in parsing in India. Good luck!