
584 Parsing Jobs - Page 21

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Engineer, Quality Engineering Noida, India Information Technology 316034 Job Description About The Role: Grade Level (for internal use): 09 The Team: The Parsing team is responsible for managing and developing our real-time quote parsing system, a key component of our wide-ranging Data Valuations and Analytics ecosystem. Our dynamic service parses, stores, and delivers real-time OTC pricing content from market content – the service parses 90+ million quotes, along with commentary, from 5+ million email messages daily. We provide traders, portfolio managers, asset managers, valuators, and others in the financial space with structured indicative over-the-counter pricing data for 40+ financial instrument types, notably CDS, Indices, Bonds, Loans, and Securitized Products (including CMBS, CLO, RMBS, etc.), useful for multiple use cases across the front office and middle office. The Impact: We are looking for a Quality Analyst to join our global technology team. You will create, design, and maintain test suites in the exciting and complex Parsing universe, which interacts with 15+ products and multiple data delivery systems. Our team strives to provide the highest quality data possible to our clients every day. Working with other developers, product subject matter experts, quality assurance and technical teams, you will contribute to the continuous improvement of the product by learning and developing expertise in our content and applications, testing new features, refactoring and migrating the existing codebase to new technologies, providing production support, sharing information, managing publication of data via our various delivery methods, and more. Our talented team is often called upon to create new, innovative components and solutions in response to the rapid changes in the world financial and regulatory markets. What’s in it for you: Build your career with a respected, global company recognized as a great place to work. Opportunity to work on code and technologies that fuel the global financial markets. Grow and improve your skills by working on enterprise-level products, new technologies, and cloud-based architecture – in a fast-paced, nimble team environment where creativity, dedication, and passion are valued. Develop expertise in the OTC financial markets and specific asset types. Work on a product that impacts 15+ internal products and produces $100m of direct and indirect revenue. Be a part of a smart, talented, motivated, dynamic, and supportive team! Responsibilities: Be an advocate and champion for quality as a mindset within the organization. Develop testing strategies and inject quality into a cross-functional team building new capabilities for some of the biggest names in the financial industry. Use your expertise to set up automation and testing suites across various UI components, data components, and APIs. Create, review, and recommend test plans, strategies, and test case documentation. Participate in all Agile team ceremonies and inject quality discussions at all possible times. Work closely with development and product groups to ensure deadlines are met. Write and execute manual and automated tests. Ensure all defects are identified and prioritized. Participate in release planning for quality-related version control. Accurately report quality status and progress to management teams. What We’re Looking For: Good experience in automation/white-box testing using Java. Experience: 3 to 6 years. Able to switch between manual and automation testing as per team requirements. 
Experience in designing high-efficiency tests for scalable, distributed, fault-tolerant applications. Strong understanding of test design techniques (Boundary Value Analysis and Equivalence Partitioning). Database knowledge with the ability to frame basic queries. Must have strong analytical and creative problem-solving skills. Proactive and able to work independently with minimal supervision. Other helpful experience and skills: Experience with cloud-based infrastructure, e.g., AWS. Experience with the BDD process. Hands-on Python experience is an added advantage. About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide – so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you – and your career – need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. 
S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards – small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 316034 Posted On: 2025-05-27 Location: Noida, Uttar Pradesh, India
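
Illustration (not part of the posting): the Boundary Value Analysis and Equivalence Partitioning techniques named above map naturally onto parameterized tests. A minimal pytest sketch follows; the validate_quote_size function and its 1–1,000,000 range are hypothetical stand-ins, not details from the role.

```python
import pytest

# Hypothetical validator: accepts quote sizes in [1, 1_000_000].
def validate_quote_size(size: int) -> bool:
    return 1 <= size <= 1_000_000

# Boundary Value Analysis: test at, just inside, and just outside each boundary.
# Equivalence Partitioning: one representative value per valid/invalid partition.
@pytest.mark.parametrize("size, expected", [
    (0, False),          # just below the lower boundary
    (1, True),           # lower boundary
    (2, True),           # just above the lower boundary
    (500_000, True),     # representative of the valid partition
    (999_999, True),     # just below the upper boundary
    (1_000_000, True),   # upper boundary
    (1_000_001, False),  # just above the upper boundary
    (-50, False),        # representative of the invalid (negative) partition
])
def test_quote_size_boundaries(size, expected):
    assert validate_quote_size(size) is expected
```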

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Type: Full-Time Experience: 5–8 years (Full Stack / Platform Development / AI Integration) The form is mandatory: https://forms.gle/j6RdvPgSc42AL1fs9 Please submit the form to nominate yourself for this role. 🧩 About the Opportunity We are developing a suite of platforms and AI-based tools for the energy, training, and consulting sectors. Our first product is a modular live training platform that integrates workflows like user onboarding, free sessions, payments, and session automation. We are hiring a Senior Full Stack Developer who can not only build scalable web systems but also help shape AI-integrated tools, microservice-based utilities, and client-facing product demos in future phases. You will work directly with the Solution Architect (founder-led) and gain exposure to product strategy, architecture, and LLM-powered innovation. 🔧 Key Responsibilities 🛠️ Core Platform Development Build modular REST APIs using Node.js (Express/NestJS) or Python (FastAPI/Django) Develop responsive frontend components in React (preferred) or Vue Model relational databases using PostgreSQL or MySQL Implement secure JWT-based authentication, RBAC, and form validation Integrate with 3rd-party APIs: Payments (Stripe, Razorpay, UPI, others) Email & automation (SendGrid, Mailchimp) Meetings (Microsoft Teams / Graph API) Assist in deployment to Azure App Service or Google Cloud Run (with guidance) Build and maintain new user-facing features and enhancements using Next.js, Material UI, and TypeScript in an Agile software development environment. Develop performant Next.js APIs that interact with large, complex datasets to manage user accounts and data submissions. Work closely with software developers, UI/UX designers, and project managers to implement high-quality features on time. Transform both front-end and back-end business requirements into secure, flexible, and scalable results. Participate in code reviews, fixing bugs, and troubleshooting software issues. Review and support refining feature requirements and testing plans. Translate user flows, wireframes, and mockups into intuitive user experiences for a wide range of devices. 🤖 AI & Product Tooling (Next Phase) Create AI-powered web tools and demos using: OpenAI / Azure OpenAI / Claude / Gemini APIs LangChain or similar libraries Prompt engineering and token handling Build tools using RAG (Retrieval-Augmented Generation): Vector DBs (ChromaDB, FAISS, Weaviate) Document parsing + embedding flows Wrap AI flows into lightweight portals, internal dashboards, or demo utilities Participate in brainstorming and prototyping client-facing solutions 🔁 Dev Workflow & Internal Automation Support CI/CD pipeline setup, Git-based deployments, and cloud configs Work with task queues (Celery, BullMQ, etc.) 
for background jobs Develop reusable modules (auth, logging, alerts) across services ✅ Required Skills & Experience 5–8 years of full stack or backend-focused experience Strong in: React.js (hooks, forms, components) Node.js OR Python (FastAPI/Django) PostgreSQL/MySQL – schema, joins, performance Deep understanding of: Modular architecture & microservices REST APIs, status codes, pagination, error handling JWT, RBAC, secure data access patterns Workflow integration mindset: connecting APIs, triggers, and user flows Git, code reviews, environment config, API documentation ⭐ Bonus Skills (Preferred, Not Mandatory) Familiarity with: LLM APIs (OpenAI, Claude, Gemini) LangChain, embeddings, vector stores Prompt tuning, chat UI, summarizers Experience with: CI/CD (GitHub Actions or similar) Azure App Services / Google Cloud Run Docker, .env configs, log monitoring Worked on SaaS platforms, internal tools, or POCs for AI integration 🌟 What You’ll Get A real chance to build meaningful products from the ground up Direct mentoring from a Solution Architect with strong domain and AI background Opportunity to work on both platform and AI product prototypes Potential to grow into long-term technical leadership or innovation roles 📩 How to Apply ✅ Filling this form is mandatory to be considered: 🔗 https://forms.gle/j6RdvPgSc42AL1fs9
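
Illustration (not part of the posting): a minimal sketch of the JWT-based authentication pattern the listing mentions, using FastAPI with the PyJWT library; the secret key, token lifetime, and route names are placeholder assumptions.

```python
import datetime

import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

SECRET_KEY = "change-me"  # placeholder; load from the environment in practice
app = FastAPI()
bearer = HTTPBearer()

def create_token(user_id: str) -> str:
    # Short-lived token with an expiry claim.
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> str:
    # Reject missing, malformed, or expired tokens with a 401.
    try:
        payload = jwt.decode(creds.credentials, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return payload["sub"]

@app.get("/me")
def me(user_id: str = Depends(current_user)):
    return {"user_id": user_id}
```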

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Purpose As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements. Partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures. Desired Skills and Experience 3-5 years of Python experience, including advanced concepts like async and OOP. Proficiency in data manipulation and dashboard creation using NumPy, Matplotlib, and Streamlit/Panel. Experience with SQL (PostgreSQL) and NoSQL (MongoDB) databases; familiarity with Pinecone is a plus. Knowledge of ETL pipelines and tools like Snowflake, DBT, Azure Data Factory, Azure Functions, and Azure Blob Storage. Experience with financial and/or alternative data products. Familiarity with version control systems such as Git. Experience in PDF parsing and writing test cases using Python. Education: B.E./B.Tech in Computer Science or related field. Key Responsibilities Partner with the data team to cater to development and automation needs of internal and external stakeholders Generate dashboards using Python Fix existing processes deployed on Azure; basic Azure experience, or a readiness to learn on the go, is expected Collaborate with the core engineering team to create central capabilities to process, manage, monitor, and distribute datasets at scale Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of datasets Engage with technical and non-technical clients as an SME on data asset offerings Key Metrics Python (async and OOP), SQL (PostgreSQL, MongoDB), Streamlit/Panel. Data Engineering and pipelines, Azure Behavioral Competencies Good communication (verbal and written) Experience in managing client stakeholders
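
Illustration (not part of the posting): PDF parsing of the sort this listing asks for often starts with page-level text extraction. A minimal sketch using the pypdf library; the file name is a placeholder.

```python
from pypdf import PdfReader

def extract_pdf_text(path: str) -> list[str]:
    """Return the extracted text of each page of a PDF."""
    reader = PdfReader(path)
    # extract_text() can return None for image-only pages; normalize to "".
    return [page.extract_text() or "" for page in reader.pages]

if __name__ == "__main__":
    for page_number, text in enumerate(extract_pdf_text("report.pdf"), start=1):
        print(f"--- page {page_number} ---")
        print(text[:200])  # preview the first 200 characters of each page
```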

Posted 1 month ago

Apply

3.0 - 10.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. CMSTDR Senior (TechOps) Key Capabilities: Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA Minimum of Splunk Power User Certification Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Perform remote and on-site gap assessments of the SIEM solution. Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.) Evaluate the SIEM based on the defined criteria and prepare audit reports Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions. Offer consultative advice on security principles and best practices related to SIEM operations Design and document a SIEM solution to meet the customer's needs Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) sources, by creating custom parsers Verification of log source data in the SIEM, following the Common Information Model (CIM) Experience in parsing and masking of data prior to ingestion in the SIEM Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources Assist clients with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM Experience in handling big data integration via Splunk Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems Hands-on experience in development and customization of Splunk Apps & Add-ons Builds advanced visualizations (interactive drilldowns, glass tables, etc.) Build and integrate contextual data into notable events Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as ES App, UEBA, ITSI, etc. Sound knowledge of configuration of Alerts and Reports. Good exposure to automatic lookups, data models and creating complex SPL queries. Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements Work with the client SPOC on correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations Experience in creating custom commands, custom alert actions, adaptive response actions, etc. 
Qualification & experience: Minimum of 3 to 10 years’ experience with a depth of network architecture knowledge that will translate to deploying and integrating a complicated security intelligence solution into global enterprise environments. Strong oral, written and listening skills are an essential component of effective consulting. Strong background in network administration. Ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary. Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows Domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience with designing and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management; multiple cluster deployment & management experience as per vendor guidelines and industry best practices; ability to troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them. Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage. Certifications in a core security-related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
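
Illustration (not part of the posting): Splunk automation of the kind described above frequently goes through the REST API. A minimal sketch of a one-shot search from Python, assuming a reachable management port (8089) and placeholder credentials; the index and query are illustrative.

```python
import requests

SPLUNK = "https://splunk.example.com:8089"  # placeholder management endpoint
AUTH = ("admin", "changeme")                # placeholder credentials

def oneshot_search(spl: str) -> dict:
    """Run a blocking one-shot search and return the JSON results."""
    resp = requests.post(
        f"{SPLUNK}/services/search/jobs",
        auth=AUTH,
        data={"search": spl, "exec_mode": "oneshot", "output_mode": "json"},
        verify=False,  # lab setting only; verify TLS properly in production
    )
    resp.raise_for_status()
    return resp.json()

# SPL passed to the REST API must start with the "search" command.
results = oneshot_search('search index=_internal log_level=ERROR | head 5')
for row in results.get("results", []):
    print(row.get("_raw", ""))
```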

Posted 1 month ago

Apply

3.0 - 10.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. CMSTDR Senior (TechOps) Key Capabilities: Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA Minimum of Splunk Power User Certification Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Perform remote and on-site gap assessments of the SIEM solution. Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.) Evaluate the SIEM based on the defined criteria and prepare audit reports Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions. Offer consultative advice on security principles and best practices related to SIEM operations Design and document a SIEM solution to meet the customer's needs Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) sources, by creating custom parsers Verification of log source data in the SIEM, following the Common Information Model (CIM) Experience in parsing and masking of data prior to ingestion in the SIEM Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources Assist clients with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM Experience in handling big data integration via Splunk Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems Hands-on experience in development and customization of Splunk Apps & Add-ons Builds advanced visualizations (interactive drilldowns, glass tables, etc.) Build and integrate contextual data into notable events Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as ES App, UEBA, ITSI, etc. Sound knowledge of configuration of Alerts and Reports. Good exposure to automatic lookups, data models and creating complex SPL queries. Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements Work with the client SPOC on correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations Experience in creating custom commands, custom alert actions, adaptive response actions, etc. 
Qualification & experience: Minimum of 3 to 10 years’ experience with a depth of network architecture knowledge that will translate to deploying and integrating a complicated security intelligence solution into global enterprise environments. Strong oral, written and listening skills are an essential component of effective consulting. Strong background in network administration. Ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary. Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows Domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience with designing and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management; multiple cluster deployment & management experience as per vendor guidelines and industry best practices; ability to troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them. Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage. Certifications in a core security-related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

3.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. CMSTDR Senior (TechOps) Key Capabilities: Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA Minimum of Splunk Power User Certification Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Perform remote and on-site gap assessments of the SIEM solution. Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.) Evaluate the SIEM based on the defined criteria and prepare audit reports Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions. Offer consultative advice on security principles and best practices related to SIEM operations Design and document a SIEM solution to meet the customer's needs Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) sources, by creating custom parsers Verification of log source data in the SIEM, following the Common Information Model (CIM) Experience in parsing and masking of data prior to ingestion in the SIEM Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources Assist clients with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM Experience in handling big data integration via Splunk Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems Hands-on experience in development and customization of Splunk Apps & Add-ons Builds advanced visualizations (interactive drilldowns, glass tables, etc.) Build and integrate contextual data into notable events Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as ES App, UEBA, ITSI, etc. Sound knowledge of configuration of Alerts and Reports. Good exposure to automatic lookups, data models and creating complex SPL queries. Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements Work with the client SPOC on correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations Experience in creating custom commands, custom alert actions, adaptive response actions, etc. 
Qualification & experience: Minimum of 3 to 10 years’ experience with a depth of network architecture knowledge that will translate to deploying and integrating a complicated security intelligence solution into global enterprise environments. Strong oral, written and listening skills are an essential component of effective consulting. Strong background in network administration. Ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary. Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows Domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience with designing and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management; multiple cluster deployment & management experience as per vendor guidelines and industry best practices; ability to troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them. Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage. Certifications in a core security-related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data. Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders) Work on tasks involving language modeling, text generation, understanding, and contextual comprehension. Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets. Build and deploy AI applications on cloud platforms – any hyperscaler (Azure, GCP or AWS). Integrate AI models with our company's data to enhance and augment existing applications. Role & Responsibility Handle data preprocessing, augmentation, and generation of synthetic data. Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution) Develop and maintain AI pipelines Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models. Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions. Ensure the robustness, efficiency, and scalability of AI systems. Stay updated with the latest advancements in AI and machine learning technologies. Skills & Experience Strong foundation in machine learning, deep learning, and computer science. Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers). Experience with natural language processing (NLP) and computer vision is a plus. Ability to work independently and as part of a team. Knowledge of advanced programming in languages like Python, and especially AI-centric libraries like TensorFlow, PyTorch, and Keras. This includes the ability to implement and manipulate complex algorithms fundamental to developing generative AI models. Knowledge of natural language processing (NLP) for text generation projects like text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models. Experience in data management, including data pre-processing, augmentation, and generation of synthetic data. This involves cleaning, labeling, and augmenting data to train and improve AI models. Experience in developing and deploying AI models in production environments. Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing and scaling AI solutions Should be able to bring new ideas and innovative solutions to our clients
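
Illustration (not part of the posting): the chunking technique mentioned in the responsibilities usually means splitting documents into overlapping windows before computing embeddings. A minimal, dependency-free sketch; the window sizes are arbitrary choices, not values from the listing.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # slide forward, keeping `overlap` chars of context
    return chunks

document = "lorem ipsum " * 500  # stand-in for a real document
print(len(chunk_text(document)))  # number of chunks produced
```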

Posted 1 month ago

Apply

2.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote

About The Role We are looking for an experienced Automation Specialist who can build, manage, and optimize workflow automations using Zapier, Make.com, and other no-code platforms. This is a technical, hands-on role focused on developing real-world automation solutions across marketing, operations, CRM, and more. You'll collaborate with project managers and engineering teams, work with APIs and scripting logic, and help clients streamline their business processes. Key Responsibilities Design and implement automation workflows using Zapier and Make.com Integrate third-party APIs, manage webhooks, and handle data parsing in JSON/XML Collaborate with cross-functional teams to understand and automate business use cases Debug, test, and improve automation flows for performance and scalability Write custom functions/scripts where needed using JavaScript, Python, or JSON Document workflows and maintain technical clarity in internal documentation Enhance and refactor existing automations based on evolving business needs Required Skills & Experience 2+ years of hands-on experience in workflow automation using Make.com, Zapier, or similar platforms Strong understanding of API integrations, logic modules, and conditional operations Proficiency in working with webhooks, arrays, filters, iterators, and data formatting Experience with JSON, XML, and basic scripting (JavaScript or Python preferred) Strong communication skills with the ability to present and explain technical solutions Bachelor's degree in Computer Science, IT, or a related field Good to Have (Not Mandatory) Experience with tools like Airtable, Notion, Slack, Google Workspace Prior experience in BPO, client onboarding, or automation consulting Familiarity with databases (SQL or NoSQL) and cloud-based integration services Exposure to custom app building or low-code platforms What We Offer Opportunity to work on high-impact automation solutions for clients across Europe, Asia, and Africa Work with cutting-edge tools in AI, automation, and low-code/no-code development Collaborative, growth-oriented team culture Continuous learning and skill development opportunities Flexible working hours with remote/hybrid options Skills: zapier,nocode,api integrations,data formatting,data parsing,webhooks,workflow automation,make.com,automation,javascript,json,python,xml,platforms
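
Illustration (not part of the posting): on the receiving end of a webhook, the JSON-parsing work described above might look like this minimal Flask sketch; the route and payload field names are hypothetical, since real payloads depend on the upstream Zapier/Make.com scenario.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    payload = request.get_json(silent=True)  # None if the body is not valid JSON
    if payload is None:
        return jsonify(error="expected a JSON body"), 400
    # Hypothetical fields; adjust to the actual upstream payload shape.
    email = payload.get("email", "").strip().lower()
    items = payload.get("items", [])
    total = sum(item.get("amount", 0) for item in items)  # simple aggregation step
    return jsonify(email=email, item_count=len(items), total=total), 200

if __name__ == "__main__":
    app.run(port=5000)
```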

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

As Lead Splunk, Your Role And Responsibilities Would Include Hands-on experience in the SIEM domain Deep understanding of Splunk backend operations (UF, HF, SH, and Indexer Cluster) and architecture Strong knowledge of Log Management and Splunk SIEM. Understanding of log collection, parsing, normalization, and retention practices. Expertise in optimizing logs and license usage. Solid understanding of designing, deploying, and implementing scalable SIEM architecture. Understanding of data parsimony as a concept, especially in terms of German data security standards. Working knowledge of integrating Splunk logging infrastructure with third-party observability tools like ELK and DataDog. Experience in identifying the security and non-security logs and applying appropriate filters to route the logs correctly. Expertise in understanding network architecture and identifying the components of impact. Proficiency in Linux administration. Experience with Syslog. Proficiency in scripting languages like Python, PowerShell, or Bash for task automation. Expertise with OEM SIEM tools, preferably Splunk. Experience with open-source SIEM/log storage solutions like ELK or Datadog. Strong documentation skills for creating high-level design (HLD), low-level design (LLD), implementation guides, and operation manuals. Skills: siem,linux administration,team collaboration,communication skills,architecture design,python,parsing,normalization,retention practices,powershell,data security,log management,bash,splunk,log collection,documentation,syslog,incident response,data analysis
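
Illustration (not part of the posting): log parsing and routing of the kind described above can be prototyped with a few lines of Python. A minimal sketch that classifies classic RFC 3164-style syslog lines by facility; the security/ops routing rule is an assumption for demonstration.

```python
import re

# Matches classic BSD-syslog lines, e.g. "<34>Oct 11 22:14:15 host app: message"
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d+)>(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<proc>[^:]+): (?P<msg>.*)$"
)

def route(line: str) -> str:
    """Classify a syslog line as 'security' or 'ops' for index routing."""
    m = SYSLOG_RE.match(line)
    if not m:
        return "ops"  # unparsed lines fall through to the default index
    facility = int(m.group("pri")) // 8  # PRI = facility * 8 + severity
    # Facilities 4 (auth) and 10 (authpriv) carry security-relevant events.
    return "security" if facility in (4, 10) else "ops"

print(route("<34>Oct 11 22:14:15 web01 sshd: Failed password for root"))  # security
```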

Posted 1 month ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Qualification: B.Sc. IT/CS, M.Sc. IT/CS, B.E. (IT, Computer Science, etc.), BCA/MCA. Key Responsibilities - Manage and maintain Zabbix monitoring infrastructure, including triggers, items, templates, hosts, and escalation workflows. - Proactively monitor Linux and Windows servers, virtual machines, and hosting environments using Zabbix. - Develop, maintain, and troubleshoot Bash scripts for automation, alert parsing, and performance reporting. - Handle Acronis Cyber Protect (Cloud or On-premises) backup solutions: job creation, restoration, and issue resolution. - Perform backup status reviews, job monitoring, restore verification, and incident tracking. - Collaborate with L1 and other technical teams to address and resolve monitoring and backup-related escalations. - Document standard operating procedures (SOPs), known error databases (KEDBs), and backup policies. - Support alert tuning, false-positive reduction, and actionable monitoring improvement initiatives. Required Skills - Zabbix Monitoring: 2+ years of hands-on experience managing enterprise Zabbix environments. - Bash Scripting: 1+ year of experience in writing/modifying scripts for system monitoring and automation. - Linux and Windows Server Management: 2+ years in operational server monitoring, preferably in hosting environments. - Experience with Acronis backup products for data protection, replication, and disaster recovery. - Knowledge of system health metrics (CPU, RAM, disk, services), log analysis, and troubleshooting tools. - Familiarity with firewall rules, agent connectivity, and whitelisting processes for cloud-based monitoring/backup tools. Soft Skills - Strong troubleshooting and analytical abilities. - Clear and concise communication skills, especially for escalation reporting. - Ability to work under minimal supervision and handle pressure situations effectively. - Willingness to work in rotational shifts and support after-hours incidents (on-call as needed).
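
Illustration (not part of the posting): the system health metrics listed above (CPU, RAM, disk) can be gathered for custom monitoring items with the psutil library. A minimal sketch; the JSON output format is an arbitrary choice.

```python
import json

import psutil

def health_snapshot() -> dict:
    """Collect basic host health metrics, e.g. for a custom monitoring item."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),   # 1-second CPU sample
        "ram_percent": psutil.virtual_memory().percent,  # memory in use
        "disk_percent": psutil.disk_usage("/").percent,  # root filesystem usage
    }

if __name__ == "__main__":
    # Emit one JSON line, easy to ingest from a monitoring agent or cron job.
    print(json.dumps(health_snapshot()))
```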

Posted 1 month ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description The Amazon Last Mile Geospatial team builds systems that model the real world to enable routing for drivers. We build, maintain and vend base map data, road network data, map tiles, geocodes of addresses and time estimates for service as well as transit times. We also provide a shortest-path service to find the fastest paths between locations and a service to optimize consolidation of stops. Together these systems help us get better at determining the locations that we go to deliver packages, figure out how to get to those locations and estimate the effort of delivery for planning. While it may be easy to say “Why build yet another Maps?” as a first reaction, as we go deeper into our problems, the answer becomes increasingly clear and challenging. We are building systems that enable depth-focused solutions. For example, we are interested in not only getting a person to an address like 300 Boren Ave N, we are also interested in helping them find out if there is a mailing room in the building and, if there is, helping them navigate quickly to that mailing room. We are also interested in accurately estimating how long it would take to arrive at the address, find the mailing room and drop a package there. We will incorporate the ability to leverage mass transit, multiple modes of transportation and traffic awareness to find the most efficient paths for our drivers. We are also interested in making it easy to calculate paths on cheap mobile devices or in simplifying the process of finding an efficient path to cover hundreds of delivery points. Several of these problems require us to build systems that can work with an ensemble of models as well as support the right segmentation of inputs to make good estimates on the outputs. There are several unsolved or partially solved problems in this space, such as automatically adding new roads detected from sensor/video data into the larger road graph, deterministically detecting if a new road is in fact just a modification to an existing road (such as a change in curvature of an existing road due to a new sidewalk), accurately determining the bearing of a person when they start traveling leveraging only a single IMU sensor source, parsing unstructured addresses such as in countries like India, and processing alternate solutions within microseconds on a mobile device without talking to a backend service. The right person for this space would enjoy working in an area that requires constantly pushing both the research and technology boundaries to unlock solutions to such problems. Our key output metrics include location accuracy, coverage and accuracy of our road network for routing users to the correct location, and predictive accuracy of service and transit estimates. We also measure the operational impact of these inputs on delivery success and on the gaps between actual versus planned on-zone times, transit times and service times. If you have an entrepreneurial spirit, know how to deliver, are deeply technical, highly innovative and long for the opportunity to build pioneering solutions to challenging problems, we want to talk to you. #lastmile #maps_intelligence #sensor_intelligence Key job responsibilities Participate in the design, implementation, and deployment of successful large-scale systems and services in support of our fulfillment operations and the businesses they support. Participate in the definition of secure, scalable, and low-latency services and efficient physical processes. 
Work in expert cross-functional teams delivering on demanding projects. Functionally decompose complex problems into simple, straightforward solutions. Understand system interdependencies and limitations. Share knowledge in performance, scalability, enterprise system architecture, and engineering best practices. Basic Qualifications Bachelor's degree in computer science or equivalent 2+ years of non-internship professional software development experience 2+ years of programming using a modern programming language such as Java, C++, or C#, including object-oriented design experience 1+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience Preferred Qualifications Experience building complex software systems that have been successfully delivered to customers Knowledge of professional software engineering & best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence Experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Dev Center India - Hyderabad Job ID: A2895142
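
Illustration (not part of the posting): geospatial routing systems like the one described lean on great-circle distance math such as the haversine formula. A minimal sketch; the sample coordinates are arbitrary.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points in kilometers."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Straight-line distance is a lower bound on road distance, which makes it an
# admissible heuristic in shortest-path search (e.g. A*).
print(round(haversine_km(17.3850, 78.4867, 17.4399, 78.4983), 2))
```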

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

As Lead Splunk, your role and responsibilities would include: Hands-on experience in the SIEM domain Expert knowledge of Splunk backend operations (UF, HF, SH and Indexer Cluster) and architecture Expert knowledge of Log Management and Splunk SIEM. Understanding of log collection, parsing, normalization, and retention practices. Expert in logs/license optimization techniques and strategy. Good understanding of designing, deployment & implementation of a scalable SIEM architecture. Understanding of data parsimony as a concept, especially in terms of German data security standards. Working knowledge of integration of Splunk logging infrastructure with 3rd-party observability tools (e.g. ELK, DataDog, etc.) Experience in identifying the security and non-security logs and applying adequate filters/re-routing the logs accordingly. Expert in understanding the network architecture and identifying the components of impact. Expert in Linux administration. Proficient in working with Syslog. Proficiency in scripting languages like Python, PowerShell, or Bash to automate tasks Expertise with OEM SIEM tools, preferably Splunk Experience with open-source SIEM/log storage solutions like ELK or Datadog, etc. Very good with documentation of HLD, LLD, implementation guides and operation manuals

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

DESCRIPTION The Digital Acceleration (DA) team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms for solving Digital business problems. Key job responsibilities Research, experiment and build proofs of concept advancing the state of the art in AI & ML. Collaborate with cross-functional teams to architect and execute technically rigorous AI projects. Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines. Engage in effective technical communication (written & spoken) with coordination across teams. Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility. Publish research papers in internal and external venues of repute Support on-call activities for critical issues Basic Qualifications Experience building machine learning models or developing algorithms for business application PhD, or a Master's degree and experience in CS, CE, ML or related field Knowledge of programming languages such as C/C++, Python, Java or Perl Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing Proficiency in coding and software development, with a strong focus on machine learning frameworks. Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc. Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting. Preferred Qualifications 3+ years of building machine learning models or developing algorithms for business application experience Have publications at top-tier peer-reviewed conferences or journals Track record of diving into data to discover hidden patterns and conducting error/deviation analysis Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, ability to accurately determine cause and effect relations Exceptional level of organization and strong attention to detail Comfortable working in a fast-paced, highly collaborative, dynamic work environment BASIC QUALIFICATIONS 3+ years of building models for business application experience PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience Experience in patents or publications at top-tier peer-reviewed conferences or journals Experience programming in Java, C++, Python or related language Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing PREFERRED QUALIFICATIONS Experience using Unix/Linux Experience in professional software development Company - ADCI MAA 15 SEZ Job ID: A2654587

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for an experienced candidate with a background in the power systems domain and working knowledge of Python. Experience working as a design engineer/manager for a DNO, developer or renewables company on EHV systems or HV/LV systems will be given high weightage. Preparation of technical design reports for the distribution sector (11 kV up to 132 kV). Working knowledge of transmission and distribution systems, including network protection & reinforcement schemes. Experience in using a power flow modelling tool such as PowerFactory, IPSA, DINIS, PSSE, etc. will be beneficial. Experience with DIgSILENT PowerFactory and Siemens PSS/E. Strong understanding of node-breaker vs. bus-branch topologies and substation layouts. Familiarity with validation parameters (voltage limits, thermal ratings, angle differences). Good documentation and communication skills for report writing and stakeholder engagement. 5 years of Python development experience in power systems or engineering domains. Proficiency with Python 3.7+ and libraries for text parsing, file handling, and interface development. Experience with the PowerFactory Python API and COM automation. Understanding of PSS/E .raw file structures and associated components. Strong troubleshooting and error handling capabilities in model-based simulations. Ability to write clean, modular, and well-documented code.
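
Illustration (not part of the posting): as a rough sketch of working with PSS/E-style .raw structures, the snippet below reads a simplified, comma-delimited bus-data block terminated by a line beginning with '0'. Real .raw layouts vary by PSS/E version, so the field positions here are assumptions, not the actual format.

```python
import csv

def read_bus_block(lines):
    """Parse a simplified PSS/E-style bus data block.

    Assumes comma-delimited records of the form:
        bus_number, 'NAME', base_kv, ...
    and stops at the terminator line that begins with '0'.
    Field positions are illustrative; real layouts vary by PSS/E version.
    """
    buses = []
    for row in csv.reader(lines, skipinitialspace=True):
        if not row or row[0].strip().startswith("0"):
            break  # end of the block
        buses.append({
            "number": int(row[0]),
            "name": row[1].strip().strip("'"),
            "base_kv": float(row[2]),
        })
    return buses

sample = [
    "101, 'GEN-A', 132.0, 2",
    "102, 'LOAD-B', 11.0, 1",
    "0 / END OF BUS DATA",
]
print(read_bus_block(sample))
```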

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us: Efficient Capital Labs (ECL) Location: India – Bangalore – Hybrid Job Type: Full-time Efficient Capital Labs (ECL) is a VC-backed, innovative fintech company headquartered in the US with a subsidiary in Bangalore, India. At ECL, our vision is to enable border-agnostic access to capital for businesses in emerging markets, such as India, so that they can benefit from lower capital costs that are available in markets such as the U.S. Our mission is to innovate for businesses and solve two of their biggest challenges: access to capital and cost of capital. We offer non-dilutive capital of up to US$2.5M for a fixed annual fee, with a 12-month repayment term. We serve our customers in a fast, seamless and cost-effective manner that does not require them to spend months of time and thousands of dollars negotiating complex equity raises through preferred stock issuance. Job Summary: We are seeking a detail-oriented and analytical professional to join our team as an Analyst. In this role, you will be responsible for reviewing and interpreting bank statements to assess financial health, identify patterns, detect irregularities, and contribute to credit and risk decision-making, as well as accurately extracting, interpreting, and inputting financial data from customer-provided documents into internal credit models and systems. You will work closely with underwriting, data science, and product teams to improve automation and accuracy in financial evaluations. Key Responsibilities: Banking: Analyze customer bank statements to extract and validate key financial metrics such as income, expenses, cash flow, overdrafts, and transaction trends. • Identify financial risks, anomalies, and inconsistencies that may impact lending or credit decisions. • Collaborate with underwriting teams to assess borrower eligibility and creditworthiness. • Use bank statement parsing tools and financial data platforms (e.g., Plaid, Teller) to streamline analysis. • Document findings clearly and maintain accurate records in compliance with regulatory requirements. • Support the automation of financial data extraction and contribute to the refinement of underwriting models. • Work with product and engineering teams to enhance the accuracy and usability of bank statement analysis tools. • Assist in fraud detection by identifying suspicious or manipulated financial documents. Financial: Spread financial statements (Income Statement, Balance Sheet, and Cash Flow) from borrower-provided documents into internal systems and models. • Normalize data across varying formats including tax returns, bank statements, audited and unaudited financials. • Review and validate financial ratios, trends, and performance indicators used in credit assessments. • Collaborate with credit analysts and underwriters to ensure accurate inputs for risk models. • Maintain financial spreading templates and assist in continuous improvement of processes and tools. • Identify discrepancies or red flags in financial data and escalate appropriately. • Ensure compliance with internal policies, regulatory standards, and data privacy requirements. Requirements: Bachelor’s degree/MBA. • Fresher or experience in financial analysis, underwriting, or a similar role in fintech, banking, or lending. • Strong understanding of financial statements and transactional data. • Familiarity with digital bank statement formats and aggregation tools (e.g., Plaid, Teller, etc.). • Proficiency in Microsoft Excel or Google Sheets; experience with SQL or Python is a plus. 
• Strong analytical and critical thinking skills with attention to detail. • Excellent written and verbal communication skills. Nice to Have: • Prior experience in small business lending, personal finance platforms, or digital banking. • Exposure to machine learning or AI-based financial document analysis tools. • Knowledge of US financial regulations, lending standards, and consumer protection policies. What We Offer: Competitive salary and benefits package Opportunity to work with a talented team of professionals Collaborative and dynamic work environment Professional growth and development opportunities How to Apply: If you're a motivated and detail-oriented candidate looking for a new challenge, please submit your resume and cover letter to nithanth@ecaplabs.com
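
Illustration (not part of the posting): the cash-flow metrics described above reduce to simple aggregations once transactions are tabular. A minimal pandas sketch, assuming a hypothetical transactions.csv with a date column and a signed amount column (positive = inflow, negative = outflow).

```python
import pandas as pd

# Hypothetical export: one row per transaction, signed amounts.
df = pd.read_csv("transactions.csv", parse_dates=["date"])

monthly = df.set_index("date").resample("M")["amount"]
summary = pd.DataFrame({
    "inflow": monthly.apply(lambda s: s[s > 0].sum()),
    "outflow": monthly.apply(lambda s: -s[s < 0].sum()),
})
summary["net_cash_flow"] = summary["inflow"] - summary["outflow"]

# Simple risk flag: months where outflow exceeds inflow.
summary["overdraft_risk"] = summary["net_cash_flow"] < 0
print(summary)
```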

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

On-site

Python Developer – AI Agent Development (CrewAI + LangChain) Location: Noida / Gwalior (On-site) Experience Required: 3+ years Employment Type: Full-time 🚀 About the Role We're seeking a Python Developer with hands-on experience in CrewAI and LangChain to join our cutting-edge AI product engineering team. If you thrive at the intersection of LLMs, agentic workflows, and autonomous tooling, this is your opportunity to build real-world AI agents that solve complex problems at scale. You’ll be responsible for designing, building, and deploying intelligent agents that leverage prompt engineering, memory systems, vector databases, and multi-step tool execution strategies. 🧠 Core Responsibilities Design and develop modular, asynchronous Python applications using clean code principles. Build and orchestrate intelligent agents using CrewAI: defining agents, tasks, memory, and crew dynamics. Develop custom chains and tools using LangChain (LLMChain, AgentExecutor, memory, structured tools). Implement prompt engineering techniques like ReAct, Few-Shot, and Chain-of-Thought reasoning. Integrate with APIs from OpenAI, Anthropic, HuggingFace, or Mistral for advanced LLM capabilities. Use semantic search and vector stores (FAISS, Chroma, Pinecone, etc.) to build RAG pipelines. Extend tool capabilities: web scraping, PDF/document parsing, API integrations, and file handling. Implement memory systems for persistent, contextual agent behavior. Leverage DSA and algorithmic skills to structure efficient reasoning and execution logic. Deploy containerized applications using Docker, Git, and modern Python packaging tools. 🛠️ Must-Have Skills Python 3.x (Async, OOP, Type Hinting, Modular Design) CrewAI (Agent, Task, Crew, Memory, Orchestration) – Must Have LangChain (LLMChain, Tools, AgentExecutor, Memory) Prompt Engineering (Few-Shot, ReAct, Dynamic Templates) LLMs & APIs (OpenAI, HuggingFace, Anthropic) Vector Stores (FAISS, Chroma, Pinecone, Weaviate) Retrieval-Augmented Generation (RAG) Pipelines Memory Systems: BufferMemory, ConversationBuffer, VectorStoreMemory Asynchronous Programming (asyncio, LangChain hooks) DSA / Algorithms (Graphs, Queues, Recursion, Time/Space Optimization) 💡 Bonus Skills Experience with machine learning libraries (Scikit-learn, XGBoost, TensorFlow basics) Familiarity with NLP concepts (embeddings, tokenization, similarity scoring) DevOps familiarity (Docker, GitHub Actions, Pipenv/Poetry) 🧭 Why Join Us? Work on cutting-edge LLM agent architecture with real-world impact. Be part of a fast-paced, experiment-driven AI team. Collaborate with passionate developers and AI researchers. Opportunity to build from scratch and influence core product design. If you're passionate about building AI systems that can reason, act, and improve autonomously, we’d love to hear from you! 📩 Drop your resume and GitHub to sameer.khan@techcarrel.com.
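
Illustration (not part of the posting): the retrieval half of the RAG pipelines this role builds reduces to nearest-neighbor search over embeddings. A minimal, dependency-free sketch in which embed() is a toy stand-in for a real embedding model (OpenAI, HuggingFace, etc.) and the documents are placeholders.

```python
import math

def embed(text: str) -> list[float]:
    """Toy embedding: character histogram. Replace with a real model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query embedding."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["invoice parsing agent", "payment webhook flow", "vector store memory"]
print(retrieve("parse invoices", docs))
# The retrieved chunks would then be stuffed into the LLM prompt as context.
```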

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Experience: 7 to 10 years Notice Period: Immediate joiners Work Timings: Normal working hours Location: Gurgaon, work from office (hybrid mode, client location) As Lead Splunk, your role and responsibilities would include: Hands-on experience in the SIEM domain. Expert knowledge of Splunk backend operations (UF, HF, SH, and indexer cluster) and architecture. Expert knowledge of log management and Splunk SIEM. Understanding of log collection, parsing, normalization, and retention practices. Expert in logs/license optimization techniques and strategy. Good understanding of designing, deploying, and implementing a scalable SIEM architecture. Understanding of data parsimony as a concept, especially in terms of German data security standards. Working knowledge of integrating Splunk logging infrastructure with third-party observability tools (e.g., ELK, Datadog). Experience in identifying security and non-security logs and applying adequate filters or re-routing the logs accordingly. Expert in understanding network architecture and identifying the components of impact. Expert in Linux administration. Proficient in working with syslog. Proficiency in scripting languages like Python, PowerShell, or Bash to automate tasks. Expertise with OEM SIEM tools, preferably Splunk. Experience with open-source SIEM/log storage solutions like ELK or Datadog. Very good with documentation of HLD, LLD, implementation guides, and operation manuals. Skills: integration with 3rd party tools, python, log management, logs optimization, documentation, security, siem architecture design, parsing, oem siem tools, linux administration, normalization, log collection, syslog, powershell, bash, security logs identification, siem, retention practices, data parsimony, splunk
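The filter/re-route requirement above is usually handled in props.conf/transforms.conf or at the forwarder, but a first-pass sketch in Python against Splunk's HTTP Event Collector could look like this; the HEC URL, token, log path, and keyword list are all assumptions for illustration.

```python
# Sketch: forward only security-relevant syslog lines to Splunk HEC.
# Assumptions: a reachable HEC endpoint, a valid token, and that simple
# keyword matching is an acceptable first-pass filter (production setups
# would usually do this in props.conf/transforms.conf or at the forwarder).
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # assumed
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token
SECURITY_KEYWORDS = ("failed password", "denied", "sudo", "authentication")

def is_security_event(line: str) -> bool:
    lowered = line.lower()
    return any(keyword in lowered for keyword in SECURITY_KEYWORDS)

def forward(line: str) -> None:
    payload = {"event": line.rstrip(), "sourcetype": "syslog"}
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json=payload,
        timeout=5,
    )
    resp.raise_for_status()

with open("/var/log/syslog", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if is_security_event(line):  # non-security lines are dropped here
            forward(line)
```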

Posted 1 month ago

Apply

10.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Role- Search Architect Required Technical Skill Set- Bachelor’s degree with relevant experience and expertise. Experience in development of AWS OpenSearch, autosuggest functionality, and search wrappers; data transportation and parsing; improving search relevance and other search features; designing/implementing search APIs. Desired Experience Range- Minimum 10 years of work experience in enterprise web application design and development, with 5+ years’ experience in a Search Architect or search lead role Key skills- AWS OpenSearch, Elasticsearch Key Responsibilities • Design, develop, and optimize search architecture using Elasticsearch/OpenSearch to enhance search accuracy, performance, and scalability. • Implement Java-based microservices for search-related functionalities. • Implement indexing strategies, data pipelines, and real-time search capabilities for a large-scale e-commerce platform. Skills & Qualifications • 10+ years of experience in search architecture or related domains. • Strong expertise in Elasticsearch/OpenSearch, including indexing, querying, tuning, and scaling. • Proficiency in Java (Spring Boot, Microservices). • Experience in ranking algorithms, query optimization, and semantic search. • Knowledge of data modeling, distributed systems, and caching strategies. • Experience with ML-based search ranking and recommendation systems is a plus.
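As a rough illustration of the autosuggest functionality mentioned above, here is a hedged sketch using the OpenSearch completion suggester over its REST API (shown in Python for brevity, even though the posting's stack is Java; the API itself is language-agnostic). The endpoint, index, and field names are assumptions.

```python
# Sketch: autosuggest against OpenSearch using the completion suggester.
# Assumptions: a local unauthenticated OpenSearch node, and an index whose
# "title_suggest" field is mapped as type "completion" (created below).
import requests

BASE = "http://localhost:9200"   # assumed endpoint
INDEX = "products"               # illustrative index name

# One-time mapping: a completion field backs fast prefix suggestions.
requests.put(f"{BASE}/{INDEX}", json={
    "mappings": {"properties": {"title_suggest": {"type": "completion"}}}
}, timeout=10)

# Index a document (refresh=true so it is searchable immediately).
requests.post(f"{BASE}/{INDEX}/_doc?refresh=true", json={
    "title_suggest": "laptop sleeve 13 inch"
}, timeout=10)

# Autosuggest query: return completions for the prefix "lap".
resp = requests.post(f"{BASE}/{INDEX}/_search", json={
    "suggest": {
        "title-suggest": {
            "prefix": "lap",
            "completion": {"field": "title_suggest"},
        }
    }
}, timeout=10)
for option in resp.json()["suggest"]["title-suggest"][0]["options"]:
    print(option["text"])
```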

Posted 1 month ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Strong knowledge of Splunk architecture, components, and deployment models (standalone, distributed, or clustered) Hands-on experience with Splunk forwarders, search processing, and index clustering Proficiency in writing SPL (Search Processing Language) queries and creating dashboards Familiarity with Linux/Unix systems and basic scripting (e.g., Bash, Python) Understanding of networking concepts and protocols (TCP/IP, syslog) We are looking for a Splunk architect to join our dynamic team. In this hybrid role, you will leverage your expertise in Python programming to develop innovative solutions while harnessing the power of Splunk for data analysis, monitoring, and automation. This position is ideal for a problem-solver passionate about integrating programming with operational intelligence tools to drive efficiency and insights across the organization. Key Responsibilities Deploy Splunk Enterprise or Splunk Cloud on servers or virtual environments. Configure indexing and search head clusters for data collection and search functionalities. Deploy universal or heavy forwarders to collect data from various sources and send it to the Splunk environment. Configure data inputs (e.g., syslog, SNMP, file monitoring) and outputs (e.g., storage, dashboards). Identify and onboard data sources such as logs, metrics, and events. Use regular expressions or predefined methods to extract fields from raw data. Configure props.conf and transforms.conf for data parsing and enrichment. Create and manage indexes to organize and control data storage. Configure roles and users with appropriate permissions using role-based access control (RBAC). Integrate Splunk with external authentication systems like LDAP, SAML, or Active Directory. Monitor user activities and changes to the Splunk environment. Optimize Splunk for better search performance and resource utilization. Regularly monitor the status of indexers, search heads, and forwarders. Configure backups for configurations and indexed data. Diagnose and resolve issues like data ingestion failures, search slowness, or system errors. Install and manage apps and add-ons from Splunkbase or custom-built solutions. Create Python scripts for automation and advanced data processing. Use KV stores for dynamic data storage and retrieval within Splunk. Plan and execute Splunk version upgrades. Regularly update apps and add-ons to maintain compatibility and security. Ensure the underlying operating system and dependencies are up to date. Integrate Splunk with ITSM tools (e.g., ServiceNow), monitoring tools, or CI/CD pipelines. Use Splunk's REST API for automation and custom integrations. Good to have: Splunk Core Certified Admin certification. Splunk Development and Administration Build and optimize complex SPL (Search Processing Language) queries for dashboards, reports, and alerts. Develop and manage Splunk apps and add-ons, including custom Python scripts for data ingestion and enrichment. Onboard and validate data sources in Splunk, ensuring proper parsing, indexing, and field extractions. Integration and Automation Leverage Python to automate Splunk administrative tasks such as monitoring, data onboarding, and alerting. Integrate Splunk with third-party tools, systems, and APIs (e.g., ServiceNow, cloud platforms, or in-house solutions). Develop custom connectors to stream data between Splunk and other platforms or databases. Data Analysis and Insights Collaborate with stakeholders to extract actionable insights from log data and metrics using Splunk.
Create advanced visualizations and dashboards to highlight key trends and anomalies. Assist in root cause analysis for performance bottlenecks or operational incidents. System Optimization and Security Enhance Splunk search performance through Python-driven optimizations and configurations. Implement security best practices in both Python code and Splunk setups, ensuring compliance with regulatory standards. Perform regular Splunk system health checks and troubleshoot issues related to data ingestion or indexing. Collaboration and Mentoring Work closely with DevOps, Security, and Data teams to align Splunk solutions with business needs. Mentor junior developers or administrators in Python and Splunk best practices. Document processes, solutions, and configurations for future reference. Python Development: Proficient in Python 3.x, with experience in libraries such as Pandas, NumPy, Flask/Django, and Requests. Strong understanding of RESTful APIs and data serialization formats (JSON, XML). Experience with version control systems like Git. Design, develop, and maintain robust Python scripts, applications, and APIs to support automation, data processing, and integration workflows. Create reusable modules and libraries to simplify recurring tasks and enhance scalability. Debug, optimize, and document Python code to ensure high performance and maintainability. Splunk Expertise: Hands-on experience in Splunk development, administration, and data onboarding. Proficiency in SPL (Search Processing Language) for creating advanced searches, dashboards, and alerts. Familiarity with props.conf and transforms.conf configurations. Other Skills: Knowledge of Linux/Unix environments, including scripting (Bash/PowerShell). Understanding of networking protocols (TCP/IP, syslog) and log management concepts. Experience with cloud platforms (AWS, Azure, or GCP) and integrating Splunk in hybrid environments.
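As one concrete example of the REST-API automation these bullets describe, a sketch like the following could run an SPL search through Splunk's search/jobs endpoint; the host, credentials, and query are placeholders.

```python
# Sketch: run an SPL search through Splunk's REST API and print results.
# Assumptions: management port 8089, basic auth, a self-signed certificate
# (hence verify=False), and an illustrative query over _internal.
import requests

BASE = "https://splunk.example.com:8089"      # assumed management endpoint
AUTH = ("admin", "changeme")                  # placeholder credentials
QUERY = "search index=_internal | head 5"     # illustrative SPL

# exec_mode=blocking makes the POST return only when the job has finished.
job = requests.post(
    f"{BASE}/services/search/jobs",
    data={"search": QUERY, "exec_mode": "blocking", "output_mode": "json"},
    auth=AUTH,
    verify=False,
    timeout=60,
)
job.raise_for_status()
sid = job.json()["sid"]

results = requests.get(
    f"{BASE}/services/search/jobs/{sid}/results",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,
    timeout=60,
)
for row in results.json().get("results", []):
    print(row.get("_raw", row))
```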

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a passionate and skilled iOS Developer with 3+ years of hands-on experience to join our dynamic team. The ideal candidate should have a strong foundation in Swift, Objective-C, and iOS frameworks, and be eager to contribute to building high-performance mobile applications with great user experience. Key Responsibilities Design, develop, and maintain iOS applications using Swift and Objective-C. Implement features using iOS frameworks like UIKit, Core Data, and Core Animation. Integrate RESTful APIs and handle JSON data parsing and formatting. Follow Apple’s Human Interface Guidelines to build intuitive UI/UX. Use Git for version control and collaborate with other developers. Debug, troubleshoot, and resolve technical issues efficiently. Participate in Agile development sprints and team planning sessions. Learn and adopt modern tools like SwiftUI and Combine. Ensure secure coding practices and app compliance with mobile security standards. Use analytics platforms such as Firebase or Google Analytics for tracking app performance. Stay updated on iOS development best practices and trends. Qualifications & Skills 3+ years of professional experience in iOS development. Proficiency in Swift and working knowledge of Objective-C. Strong understanding of key iOS frameworks (UIKit, Core Data, etc.). Experience in consuming RESTful APIs and working with JSON. Familiarity with Apple’s Human Interface Guidelines. Proficient with Git and source control best practices. Solid debugging and problem-solving skills. Exposure to SwiftUI and Combine (preferred but not required). Basic understanding of mobile app security standards. Familiarity with CI/CD pipelines (a plus). Bonus: Experience or interest in working with video streaming or DRM technologies. Strong communication skills and a team-oriented mindset.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Desired Competencies (Technical/Behavioral Competency) Exp Range - 6-9 yrs Hiring Location - Chn/Blr Must-Have Job Summary: We are seeking a skilled and detail-oriented Splunk Administrator to join our IT team. This role is critical to the success of our newly implemented Splunk monitoring environment. The ideal candidate will be responsible for onboarding new applications into Splunk, ensuring log data is correctly ingested, indexed, and visualized in alignment with operational and security requirements. Key responsibilities: Work closely with application owners, developers, and infrastructure teams to understand logging requirements. Implement log ingestion pipelines using the defined FCB process. Develop and configure data inputs, parsing, field extractions, and source types. Onboard applications using universal forwarders, syslog, or other defined ingestion methods. Ensure compliance with data onboarding standards and naming conventions. Update and maintain technical documentation related to onboarding procedures and data sources. Configure indexes, inputs, and props/transforms as needed for new data sources. Monitor data ingestion health and troubleshoot onboarding issues. Collaborate with the Splunk engineering and security teams to optimize data usage and performance. Assist in building and deploying dashboards, alerts, and reports to support operational visibility. Perform regular health checks of the application and report any discrepancies. Required skills: Relevant experience in Splunk administration in a mid-to-large enterprise environment. Strong knowledge of log formats, ingestion techniques, and Splunk configuration files (inputs.conf, props.conf, transforms.conf, etc.). Experience in onboarding applications using forwarders and syslog. Scripting skills (Python, PowerShell). Good understanding of networking concepts and log sources such as firewalls, operating systems, middleware, and cloud services. Ability to work independently and in a cross-functional team environment. Excellent documentation, communication, and troubleshooting skills.
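To make the onboarding bullets concrete, here is a small, hedged sketch that renders inputs.conf and props.conf stanzas for a new application under an assumed naming convention; the app name, paths, and timestamp format are illustrative, and real onboarding would follow the team's defined FCB process.

```python
# Sketch: render inputs.conf / props.conf stanzas for onboarding a new app.
# Assumptions: monitor-based ingestion, an illustrative index/sourcetype
# naming convention, and a simple timestamp format for the app's logs.
from pathlib import Path

APP = "payments"                       # illustrative application name
INDEX = f"app_{APP}"                   # assumed naming convention
SOURCETYPE = f"{APP}:applog"

inputs_conf = f"""[monitor:///opt/{APP}/logs/app.log]
index = {INDEX}
sourcetype = {SOURCETYPE}
disabled = false
"""

props_conf = f"""[{SOURCETYPE}]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\\r\\n]+)
"""

out = Path("onboarding") / APP
out.mkdir(parents=True, exist_ok=True)
(out / "inputs.conf").write_text(inputs_conf)
(out / "props.conf").write_text(props_conf)
print(f"Wrote stanzas for {SOURCETYPE} to {out}/")
```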

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of Generative AI. Key Responsibilities Include Research, experiment and build Proof Of Concepts advancing the state of the art in AI & ML for GenAI. Collaborate with cross-functional teams to architect and execute technically rigorous AI projects. Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines. Engage in effective technical communication (written & spoken) with coordination across teams. Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility. Publish research papers in internal and external venues of repute. Support on-call activities for critical issues. Basic Qualifications Master’s or PhD in computer science, statistics or a related field. 2-7 years of experience in deep learning, machine learning, and data science. Proficiency in coding and software development, with a strong focus on machine learning frameworks. Experience in Python, or another language; command line usage; familiarity with Linux and AWS ecosystems. Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc. Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting. Papers published in AI/ML venues of repute. Preferred Qualifications Track record of diving into data to discover hidden patterns and conducting error/deviation analysis. Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, ability to accurately determine cause and effect relations. The motivation to achieve results in a fast-paced environment. Exceptional level of organization and strong attention to detail. Comfortable working in a fast-paced, highly collaborative, dynamic work environment. Basic Qualifications 3+ years of building models for business application experience. Experience in patents or publications at top-tier peer-reviewed conferences or journals. Experience programming in Java, C++, Python or related language. Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing. Knowledge of standard speech and machine learning techniques. Preferred Qualifications Experience using Unix/Linux. Experience in professional software development. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2991773
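As a small illustration of the error-measurement qualification mentioned above, a normal-approximation confidence interval for an accuracy estimate can be computed as follows; the counts are made-up values for the sketch.

```python
# Sketch: 95% confidence interval for an accuracy estimate, the kind of
# error-measurement check the qualifications mention. Uses the
# normal-approximation interval; the counts are fabricated illustration data.
import math

correct, total = 870, 1000          # illustrative evaluation results
p_hat = correct / total             # point estimate of accuracy
z = 1.96                            # ~95% two-sided normal quantile
half_width = z * math.sqrt(p_hat * (1 - p_hat) / total)

print(f"accuracy = {p_hat:.3f} ± {half_width:.3f} "
      f"(95% CI: [{p_hat - half_width:.3f}, {p_hat + half_width:.3f}])")
```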

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

This role is eligible for our hybrid work model: Two days in-office. Our Technology team is the backbone of our company: constantly creating, testing, learning and iterating to better meet the needs of our customers. If you thrive in a fast-paced, ideas-led environment, you’re in the right place. Why This Job’s a Big Deal Join an Agile team of professionals that are instrumental in building the next generation of travel applications. We constantly explore new technologies and engineer better solutions for ever-demanding business needs. Our team of engineers, at all levels, work with the business leaders in defining the product roadmap and come up with innovative solutions to grow the future of travel. We design and develop our back end systems and REST APIs that serve hundreds of millions of searches a day, collecting and parsing data across thousands of partners to get the best deals for our customers. In This Role You Will Get To Participate in mission-critical projects with direct impact on the evolution of Priceline's business. Be part of a cross-functional agile team that continuously experiments, iterates and delivers on new product objectives. Showcase your development skills in Core Java or similar programming languages. Apply your programming skills towards building low-latency and high-throughput transactional services with continuous integration and automation testing. Apply SQL composition skills to collect and query data for investigation and analysis in real time from our applications. Utilize your knowledge to understand our codebase, systems and business requirements to effectively make changes to our applications. Effectively collaborate and engage in team efforts, speak up for what you think are the best solutions and be able to converse respectfully and compromise when necessary. Who You Are Bachelor’s degree or higher in Computer Science or related field. 3+ years of experience in software engineering and development. Strong coding experience with Core Java. Thorough SQL composition skills for composing queries and analysis. Comfort and experience with Spring Boot and REST APIs. Experience in microservices is a must. Experience with developing on cloud platforms, especially GCP, AWS, or Azure. Illustrated history of living the values necessary to Priceline: Customer, Innovation, Team, Accountability and Trust. The Right Results, the Right Way is not just a motto at Priceline; it’s a way of life. Unquestionable integrity and ethics are essential. Who We Are WE ARE PRICELINE. Our success as one of the biggest players in online travel is all thanks to our incredible, dedicated team of talented employees. Priceliners are focused on being the best travel deal makers in the world, motivated by our passion to help everyone experience the moments that matter most in their lives. Whether it’s a dream vacation, your cousin’s graduation, or your best friend’s wedding - we make travel affordable and accessible to our customers. Our culture is unique and inspiring (that’s what our employees tell us). We’re a grown-up startup. We deliver the excitement of a new venture, without the struggles and chaos that can come with a business that hasn’t stabilized. We’re on the cutting edge of innovative technologies. We keep the customer at the center of all that we do. Our ability to meet their needs relies on the strength of a workforce as diverse as the customers we serve.
We bring together employees from all walks of life and we are proud to provide the kind of inclusive environment that stimulates innovation, creativity and collaboration. Priceline is part of the Booking Holdings, Inc. (Nasdaq: BKNG) family of companies, a highly profitable global online travel company with a market capitalization of over $80 billion. Our sister companies include Booking.com, BookingGo, Agoda, Kayak and OpenTable. If you want to be part of something truly special, check us out! Flexible work at Priceline Priceline is following a hybrid working model, which includes two days onsite as determined by you and your manager (ideally selecting among Tuesday, Wednesday, or Thursday). On the remaining days, you can choose to be remote or in the office. Diversity and Inclusion are a Big Deal! To be the best travel dealmakers in the world, it’s important we have a workforce that reflects the diverse customers and communities we serve. We are committed to cultivating a culture where all employees have the freedom to bring their individual perspectives, life experiences, and passion to work. Priceline is a proud equal opportunity employer. We embrace and celebrate the unique lenses through which our employees see the world. We’d love you to join us and add to our rich mix! Applying for this position We're excited that you are interested in a career with us. For all current employees, please use the internal portal to find jobs and apply. External candidates are required to have an account before applying. When you click Apply, returning candidates can log in, or new candidates can quickly create an account to save/view applications.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description EU INTech Partner Growth Experience (PGX) is seeking an Applied Scientist to lead the development of machine learning solutions for the EU Consumer Electronics business. In this role, you will push the boundaries of advanced ML techniques and collaborate closely with product and engineering teams to create innovative buying and forecasting solutions for the business. These new models will primarily benefit the Smart Retail project, which aims to revolutionize CPFR (Collaborative Planning, Forecasting, and Replenishment) Retail operations, driving automation, enhancing decision-making processes, and achieving scale across eligible categories such as PC, Home Entertainment or Wireless. The Smart Retail solution is composed of an internal interface automating selection management mechanisms currently performed manually, followed by the creation of a vendor-facing interface on Vendor Central reducing time spent collecting required inputs. The project's key functionalities include (i) a Ranging model operating from category to product attributes level, pre-ASIN creation and when selection is substitutable, (ii) an advanced forecasting model designed for new selection and accounting for cannibalization, (iii) ordering inputs optimization in line with SCOT guideline compliance, and intelligent inventory management for sell-through tracking. Smart Retail's success also depends on its integration with existing systems (SCOT) to minimize manual intervention and increase accuracy. Key job responsibilities Design, develop, and deploy advanced machine learning models to address complex, real-world challenges at scale. Build new forecasting and time-series models or enhance existing methods using scalable techniques. Partner with cross-functional teams, including product managers and engineers, to identify impactful opportunities and deliver science-driven solutions. Develop and optimize scalable ML solutions, ensuring seamless production integration and measurable impact on business metrics. Continuously enhance model performance through retraining, parameter tuning, and architecture improvements using Amazon’s extensive data resources. Lead initiatives, mentor junior scientists and engineers, and promote the adoption of ML methodologies across teams. Stay abreast of advancements in ML research, contribute to top-tier publications, and actively engage with the scientific community. Basic Qualifications PhD, or Master's degree and 3+ years of CS, CE, ML or related field experience 3+ years of building models for business application experience Experience programming in Java, C++, Python or related language Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing Preferred Qualifications Experience in patents or publications at top-tier peer-reviewed conferences or journals 3+ years of hands-on predictive modeling and large data analysis experience Experience working with large-scale distributed systems such as Spark, Sagemaker or similar frameworks Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka - A66 Job ID: A2873880
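For a sense of the baseline such forecasting models are typically judged against, here is a hedged seasonal-naive sketch on synthetic weekly demand; the series, seasonality, and metric choice are illustrative assumptions, not the team's actual method.

```python
# Sketch: a seasonal-naive baseline of the kind new forecasting models are
# usually benchmarked against. Weekly seasonality and the demand series
# are illustrative assumptions; real CPFR models would be far richer.
import numpy as np

rng = np.random.default_rng(0)
weeks, season = 104, 52
demand = (100
          + 20 * np.sin(2 * np.pi * np.arange(weeks) / season)
          + rng.normal(0, 5, weeks))

train, test = demand[:season], demand[season:]
forecast = train  # seasonal naive: repeat last year's same-week value

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"seasonal-naive MAPE over the holdout year: {mape:.1f}%")
```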

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions for providing better value to Amazon’s customers? Do you want to build and deploy advanced ML systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real-world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit to join the Machine Learning team for India Consumer Businesses. Machine Learning, Big Data and related quantitative sciences have been strategic to Amazon from the early years. Amazon has been a pioneer in areas such as recommendation engines, ecommerce fraud detection and large-scale optimization of fulfillment center operations. As Amazon has rapidly grown and diversified, the opportunity for applying machine learning has exploded. We have a very broad collection of practical problems where machine learning systems can dramatically improve the customer experience, reduce cost, and drive speed and automation. These include product bundle recommendations for millions of products, safeguarding financial transactions by building risk models, improving catalog quality via extracting product attribute values from structured/unstructured data for millions of products, and enhancing address quality by powering customer suggestions. We are developing state-of-the-art machine learning solutions to accelerate the Amazon India growth story. Amazon India is an exciting place to be for a machine learning practitioner. We have the eagerness of a fresh startup to absorb machine learning solutions, and the scale of a mature firm to help support their development at the same time. As part of the India Machine Learning team, you will get to work alongside brilliant minds motivated to solve real-world machine learning problems that make a difference to millions of our customers. We encourage thought leadership and blue ocean thinking in ML. Key job responsibilities Use machine learning and analytical techniques to create scalable solutions for business problems Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes Design, develop, evaluate, and deploy innovative and highly scalable ML models Work closely with software engineering teams to drive real-time model implementations Work closely with business partners to identify problems and propose machine learning solutions Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model maintenance Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production Lead projects and mentor other scientists and engineers in the use of ML techniques About The Team The International Machine Learning Team is responsible for building novel ML solutions that attack India-first problems (and those of other emerging markets across MENA and LatAm) and impact the bottom line and top line of the India business.
Learn more about our team from https://www.amazon.science/working-at-amazon/how-rajeev-rastogis-machine-learning-team-in-india-develops-innovations-for-customers-worldwide Basic Qualifications 5+ years of building models for business application experience PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience Experience in patents or publications at top-tier peer-reviewed conferences or journals Experience programming in Java, C++, Python or related language Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing Preferred Qualifications Experience using Unix/Linux Experience in professional software development Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2759531
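As a toy version of the bundle-recommendation problem mentioned above, item-item cosine similarity over a purchase matrix looks like this; the matrix and item names are fabricated for illustration.

```python
# Sketch: item-item cosine similarity over a toy purchase matrix, the
# simplest version of the bundle-recommendation problem the posting cites.
# The matrix and item names are made-up illustration data.
import numpy as np

items = ["phone", "case", "charger", "tripod"]
# rows = customers, columns = items, 1 = purchased
purchases = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

norms = np.linalg.norm(purchases, axis=0)
sim = (purchases.T @ purchases) / np.outer(norms, norms)  # cosine similarity
np.fill_diagonal(sim, 0.0)  # ignore self-similarity

anchor = items.index("phone")
best = int(sim[anchor].argmax())
print(f"customers who bought 'phone' also bought: '{items[best]}'")
```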

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies