
356 Parsing Jobs - Page 12

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

0 years

0 Lacs

Gurugram, Haryana, India

On-site


As Lead Splunk, your role and responsibilities would include:
- Hands-on experience in the SIEM domain
- Expert knowledge of splunk> backend operations (UF, HF, SH and Indexer Cluster) and architecture
- Expert knowledge of Log Management and Splunk SIEM; understanding of log collection, parsing, normalization, and retention practices
- Expert in logs/license optimization techniques and strategy
- Good understanding of the design, deployment & implementation of a scalable SIEM architecture
- Understanding of data parsimony as a concept, especially in terms of German data security standards
- Working knowledge of integrating Splunk logging infrastructure with 3rd-party observability tools (e.g. ELK, DataDog)
- Experience in identifying security and non-security logs and applying adequate filters/re-routing the logs accordingly
- Expert in understanding the network architecture and identifying the components of impact
- Expert in Linux administration
- Proficient in working with Syslog
- Proficiency in scripting languages like Python, PowerShell, or Bash to automate tasks
- Expertise with OEM SIEM tools, preferably Splunk
- Experience with open-source SIEM/log storage solutions like ELK or Datadog
- Very good with documentation of HLD, LLD, implementation guides and operation manuals

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description

The Digital Acceleration (DA) team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms for solving Digital business problems.

Key job responsibilities
- Research, experiment and build proofs of concept advancing the state of the art in AI & ML.
- Collaborate with cross-functional teams to architect and execute technically rigorous AI projects.
- Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines.
- Engage in effective technical communication (written & spoken) with coordination across teams.
- Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility.
- Publish research papers in internal and external venues of repute.
- Support on-call activities for critical issues.

Basic qualifications
- Experience building machine learning models or developing algorithms for business applications
- PhD, or a Master's degree and experience, in CS, CE, ML or a related field
- Knowledge of programming languages such as C/C++, Python, Java or Perl
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
- Proficiency in coding and software development, with a strong focus on machine learning frameworks
- Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc.
- Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting
Preferred qualifications
- 3+ years of experience building machine learning models or developing algorithms for business applications
- Publications at top-tier peer-reviewed conferences or journals
- Track record of diving into data to discover hidden patterns and conducting error/deviation analysis
- Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, and ability to accurately determine cause-and-effect relations
- Exceptional level of organization and strong attention to detail
- Comfortable working in a fast-paced, highly collaborative, dynamic work environment

Basic qualifications
- 3+ years of experience building models for business applications
- PhD, or Master's degree and 4+ years of experience in CS, CE, ML or a related field
- Patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python or a related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Preferred qualifications
- Experience using Unix/Linux
- Experience in professional software development

Company - ADCI MAA 15 SEZ
Job ID: A2654587

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


I am looking for an experienced resource with experience in the power systems domain and knowledge of Python. Experience working as a design engineer/manager for a DNO, developer or renewables company on EHV or HV/LV systems will be given high weightage.

- Preparation of technical design reports for the distribution sector (11 kV up to 132 kV).
- Working knowledge of transmission and distribution systems, including network protection & reinforcement schemes.
- Experience using a power-flow modelling tool such as PowerFactory, IPSA, DINIS or PSS/E will be beneficial.
- Experience with DIgSILENT PowerFactory and Siemens PSS/E.
- Strong understanding of node-breaker vs. bus-branch topologies and substation layouts.
- Familiarity with validation parameters (voltage limits, thermal ratings, angle differences).
- Good documentation and communication skills for report writing and stakeholder engagement.
- 5 years of Python development experience in power systems or engineering domains.
- Proficiency with Python 3.7 and libraries for text parsing, file handling, and interface development.
- Experience with the PowerFactory Python API and COM automation.
- Understanding of PSS/E .raw file structures and associated components.
- Strong troubleshooting and error-handling capabilities in model-based simulations.
- Ability to write clean, modular, and well-documented code.
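The PSS/E .raw text parsing mentioned above mostly comes down to reading comma-separated records section by section, where a line whose first field is 0 terminates a section. A minimal sketch for the bus-data section, assuming the common v33-style layout in which bus number, name and base kV are the first three fields (verify field positions against your PSS/E version):

```python
import csv
import io

def parse_buses(raw_text: str):
    """Parse bus records from the bus-data section of a PSS/E .raw file.

    Assumes a v33-style layout: bus number, name, base kV are the first
    three comma-separated fields; a line whose first field starts with 0
    ends the section. Field positions vary by PSS/E version.
    """
    buses = []
    for row in csv.reader(io.StringIO(raw_text), skipinitialspace=True):
        if not row or row[0].strip().startswith("0"):
            break  # the "0" record terminates the bus-data section
        buses.append({
            "number": int(row[0]),
            "name": row[1].strip().strip("'").strip(),  # names are quoted with '
            "base_kv": float(row[2]),
        })
    return buses

# Toy two-bus section, in the style of a .raw bus-data block.
sample = """101, 'NUC-A', 21.600, 2
102, 'NUC-B', 21.600, 2
0 / END OF BUS DATA
"""
print(parse_buses(sample))
```

A production script would additionally walk the subsequent sections (load, branch, transformer data) the same way, keyed off the section terminators.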

Posted 3 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Lead Splunk Engineer
Location: Gurgaon (Hybrid)
Experience: 7-10 Years
Employment Type: Full-time
Notice Period: Immediate joiners preferred

Job Summary: We are seeking an experienced Lead Splunk Engineer to design, deploy, and optimize SIEM solutions, with expertise in Splunk architecture, log management, and security event monitoring. The ideal candidate will have hands-on experience in Linux administration, scripting, and integrating Splunk with tools like ELK & DataDog.

Key Responsibilities:
✔ Design & deploy scalable Splunk SIEM solutions (UF, HF, SH, Indexer Clusters).
✔ Optimize log collection, parsing, normalization, and retention.
✔ Ensure license & log optimization for cost efficiency.
✔ Integrate Splunk with 3rd-party tools (ELK, DataDog, etc.).
✔ Develop automation scripts (Python/Bash/PowerShell).
✔ Create technical documentation (HLD, LLD, runbooks).

Skills Required:
🔹 Expert in Splunk (architecture, deployment, troubleshooting)
🔹 Strong SIEM & log management knowledge
🔹 Linux/Unix administration
🔹 Scripting (Python, Bash, PowerShell)
🔹 Experience with ELK/DataDog
🔹 Understanding of German data security standards (GDPR/data parsimony)

Why Join Us?
- Opportunity to work with cutting-edge security tools.
- Hybrid work model (Gurgaon-based).
- Collaborative & growth-oriented environment.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Us: Efficient Capital Labs (ECL)
Location: India – Bangalore – Hybrid
Job Type: Full-time

Efficient Capital Labs (ECL) is a VC-backed, innovative fintech company headquartered in the US with a subsidiary in Bangalore, India. At ECL, our vision is to enable border-agnostic access to capital for businesses in emerging markets, such as India, so that they can benefit from the lower capital costs available in markets such as the U.S. Our mission is to innovate for businesses and solve two of their biggest challenges: access to capital and cost of capital. We offer non-dilutive capital of up to US$2.5M for a fixed annual fee, with a 12-month repayment term. We serve our customers in a fast, seamless and cost-effective manner that does not require them to spend months of time and thousands of dollars negotiating complex equity raises through preferred stock issuance.

Job Summary: We are seeking a detail-oriented and analytical professional to join our team as an Analyst. In this role, you will be responsible for reviewing and interpreting bank statements to assess financial health, identify patterns, detect irregularities, and contribute to credit and risk decision-making, as well as accurately extracting, interpreting, and inputting financial data from customer-provided documents into internal credit models and systems. You will work closely with underwriting, data science, and product teams to improve automation and accuracy in financial evaluations.

Key Responsibilities:

Banking:
• Analyze customer bank statements to extract and validate key financial metrics such as income, expenses, cash flow, overdrafts, and transaction trends.
• Identify financial risks, anomalies, and inconsistencies that may impact lending or credit decisions.
• Collaborate with underwriting teams to assess borrower eligibility and creditworthiness.
• Use bank statement parsing tools and financial data platforms (e.g., Plaid, Teller) to streamline analysis.
• Document findings clearly and maintain accurate records in compliance with regulatory requirements.
• Support the automation of financial data extraction and contribute to the refinement of underwriting models.
• Work with product and engineering teams to enhance the accuracy and usability of bank statement analysis tools.
• Assist in fraud detection by identifying suspicious or manipulated financial documents.

Financial:
• Spread financial statements (income statement, balance sheet, and cash flow) from borrower-provided documents into internal systems and models.
• Normalize data across varying formats, including tax returns, bank statements, and audited and unaudited financials.
• Review and validate financial ratios, trends, and performance indicators used in credit assessments.
• Collaborate with credit analysts and underwriters to ensure accurate inputs for risk models.
• Maintain financial spreading templates and assist in continuous improvement of processes and tools.
• Identify discrepancies or red flags in financial data and escalate appropriately.
• Ensure compliance with internal policies, regulatory standards, and data privacy requirements.

Requirements:
• Bachelor's degree/MBA.
• Fresher, or experience in financial analysis, underwriting, or a similar role in fintech, banking, or lending.
• Strong understanding of financial statements and transactional data.
• Familiarity with digital bank statement formats and aggregation tools (e.g., Plaid, Teller).
• Proficiency in Microsoft Excel or Google Sheets; experience with SQL or Python is a plus.
• Strong analytical and critical thinking skills with attention to detail.
• Excellent written and verbal communication skills.

Nice to Have:
• Prior experience in small business lending, personal finance platforms, or digital banking.
• Exposure to machine learning or AI-based financial document analysis tools.
• Knowledge of US financial regulations, lending standards, and consumer protection policies.

What We Offer:
- Competitive salary and benefits package
- Opportunity to work with a talented team of professionals
- Collaborative and dynamic work environment
- Professional growth and development opportunities

How to Apply: If you're a motivated candidate looking for a new challenge, please submit your resume and cover letter to nithanth@ecaplabs.com
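The bank-statement analysis described above largely reduces to aggregating parsed transactions into monthly income, expense and net figures. A stdlib-only sketch; the `(date, amount)` pair format is a hypothetical simplification of what an aggregation tool such as Plaid or Teller would actually return:

```python
from collections import defaultdict

def monthly_cash_flow(transactions):
    """Aggregate transactions into monthly income/expense/net figures.

    Each transaction is a ("YYYY-MM-DD", amount) pair with positive
    amounts as credits and negative as debits -- a simplified stand-in
    for real parser output.
    """
    summary = defaultdict(lambda: {"income": 0.0, "expense": 0.0})
    for date, amount in transactions:
        month = date[:7]  # "YYYY-MM" bucket
        if amount >= 0:
            summary[month]["income"] += amount
        else:
            summary[month]["expense"] += -amount
    # Derive net per month, sorted chronologically.
    return {
        m: {**v, "net": v["income"] - v["expense"]}
        for m, v in sorted(summary.items())
    }

txns = [
    ("2024-01-03", 5000.0),   # salary credit
    ("2024-01-10", -1200.0),  # rent debit
    ("2024-02-03", 5000.0),
    ("2024-02-21", -300.0),
]
print(monthly_cash_flow(txns))
```

Anomaly checks (overdrafts, sudden trend breaks) would then be simple rules or statistics layered on top of these monthly aggregates.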

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

India

On-site


Python Developer – AI Agent Development (CrewAI + LangChain)
Location: Noida / Gwalior (On-site)
Experience Required: Minimum 3+ years
Employment Type: Full-time

🚀 About the Role
We're seeking a Python Developer with hands-on experience in CrewAI and LangChain to join our cutting-edge AI product engineering team. If you thrive at the intersection of LLMs, agentic workflows, and autonomous tooling, this is your opportunity to build real-world AI agents that solve complex problems at scale. You'll be responsible for designing, building, and deploying intelligent agents that leverage prompt engineering, memory systems, vector databases, and multi-step tool execution strategies.

🧠 Core Responsibilities
- Design and develop modular, asynchronous Python applications using clean-code principles.
- Build and orchestrate intelligent agents using CrewAI: defining agents, tasks, memory, and crew dynamics.
- Develop custom chains and tools using LangChain (LLMChain, AgentExecutor, memory, structured tools).
- Implement prompt engineering techniques like ReAct, few-shot, and chain-of-thought reasoning.
- Integrate with APIs from OpenAI, Anthropic, HuggingFace, or Mistral for advanced LLM capabilities.
- Use semantic search and vector stores (FAISS, Chroma, Pinecone, etc.) to build RAG pipelines.
- Extend tool capabilities: web scraping, PDF/document parsing, API integrations, and file handling.
- Implement memory systems for persistent, contextual agent behavior.
- Leverage DSA and algorithmic skills to structure efficient reasoning and execution logic.
- Deploy containerized applications using Docker, Git, and modern Python packaging tools.
🛠️ Must-Have Skills
- Python 3.x (async, OOP, type hinting, modular design)
- CrewAI (Agent, Task, Crew, memory, orchestration) – must have
- LangChain (LLMChain, tools, AgentExecutor, memory)
- Prompt engineering (few-shot, ReAct, dynamic templates)
- LLMs & APIs (OpenAI, HuggingFace, Anthropic)
- Vector stores (FAISS, Chroma, Pinecone, Weaviate)
- Retrieval-augmented generation (RAG) pipelines
- Memory systems: BufferMemory, ConversationBuffer, VectorStoreMemory
- Asynchronous programming (asyncio, LangChain hooks)
- DSA / algorithms (graphs, queues, recursion, time/space optimization)

💡 Bonus Skills
- Experience with machine learning libraries (scikit-learn, XGBoost, TensorFlow basics)
- Familiarity with NLP concepts (embeddings, tokenization, similarity scoring)
- DevOps familiarity (Docker, GitHub Actions, Pipenv/Poetry)

🧭 Why Join Us?
- Work on cutting-edge LLM agent architecture with real-world impact.
- Be part of a fast-paced, experiment-driven AI team.
- Collaborate with passionate developers and AI researchers.
- Opportunity to build from scratch and influence core product design.

If you're passionate about building AI systems that can reason, act, and improve autonomously, we'd love to hear from you!

📩 Drop your resume and GitHub to sameer.khan@techcarrel.com.
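The retrieval step of the RAG pipelines listed above is nearest-neighbour search over embedding vectors. A dependency-free sketch with a toy in-memory store and made-up 3-dimensional "embeddings" (a real pipeline would use FAISS/Chroma and a model-generated embedding, then prepend the retrieved chunks to the LLM prompt):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorStore:
    """In-memory stand-in for FAISS/Chroma: stores (vector, text) pairs
    and returns the top-k most similar texts for a query vector."""

    def __init__(self):
        self.items = []

    def add(self, vector, text):
        self.items.append((vector, text))

    def search(self, query, k=1):
        ranked = sorted(self.items, key=lambda it: cosine(it[0], query), reverse=True)
        return [text for _, text in ranked[:k]]

# Made-up 3-d embeddings; a real pipeline would call an embedding model.
store = ToyVectorStore()
store.add([1.0, 0.0, 0.1], "Invoices are due within 30 days.")
store.add([0.0, 1.0, 0.1], "The agent can call external tools.")

context = store.search([0.9, 0.1, 0.0], k=1)
print(context)
```

Swapping the toy store for a production vector database changes only the `add`/`search` calls; the retrieve-then-generate shape of the pipeline stays the same.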

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Experience: 7+ to 10 years
Notice Period: Immediate joiners
Work Timings: Normal working hours
Location: Gurgaon, work from office (hybrid mode, client location)

As Lead Splunk, your role and responsibilities would include:
- Hands-on experience in the SIEM domain
- Expert knowledge of splunk> backend operations (UF, HF, SH and Indexer Cluster) and architecture
- Expert knowledge of Log Management and Splunk SIEM; understanding of log collection, parsing, normalization, and retention practices
- Expert in logs/license optimization techniques and strategy
- Good understanding of the design, deployment & implementation of a scalable SIEM architecture
- Understanding of data parsimony as a concept, especially in terms of German data security standards
- Working knowledge of integrating Splunk logging infrastructure with 3rd-party observability tools (e.g. ELK, DataDog)
- Experience in identifying security and non-security logs and applying adequate filters/re-routing the logs accordingly
- Expert in understanding the network architecture and identifying the components of impact
- Expert in Linux administration
- Proficient in working with Syslog
- Proficiency in scripting languages like Python, PowerShell, or Bash to automate tasks
- Expertise with OEM SIEM tools, preferably Splunk
- Experience with open-source SIEM/log storage solutions like ELK or Datadog
- Very good with documentation of HLD, LLD, implementation guides and operation manuals

Skills: integration with 3rd-party tools, Python, log management, logs optimization, documentation, security, SIEM architecture design, parsing, OEM SIEM tools, Linux administration, normalization, log collection, syslog, PowerShell, Bash, security logs identification, SIEM, retention practices, data parsimony, Splunk

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

New Delhi, Delhi, India

On-site


Role: Search Architect

Required Technical Skill Set: Bachelor's degree with relevant experience and expertise. Experience in development of AWS OpenSearch, autosuggest functionality, and search wrappers; data transportation and parsing; improving search relevance and other search features; design/implementation of search APIs.

Desired Experience Range: Minimum 10 years of work experience in enterprise web application design and development, with 5+ years as a Search Architect or in a search lead role.

Key Skills: AWS OpenSearch, Elasticsearch

Key Responsibilities
• Design, develop and optimize search architecture using Elasticsearch/OpenSearch to enhance search accuracy, performance, and scalability.
• Implement Java-based microservices for search-related functionalities.
• Implement indexing strategies, data pipelines and real-time search capabilities for a large-scale e-commerce platform.

Skills & Qualifications
• 10+ years of experience in search architecture or related domains.
• Strong expertise in Elasticsearch/OpenSearch, including indexing, querying, tuning, and scaling.
• Proficiency in Java (Spring Boot, microservices).
• Experience in ranking algorithms, query optimization, and semantic search.
• Knowledge of data modeling, distributed systems, and caching strategies.
• Experience with ML-based search ranking and recommendation systems is a plus.
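Autosuggest in Elasticsearch/OpenSearch is typically served by the completion suggester, which is driven by a small JSON request body. A sketch of building that body in Python; the index field name `title_suggest` is a placeholder and would need a `completion`-type mapping in the real index:

```python
import json

def autosuggest_query(prefix: str, field: str = "title_suggest", size: int = 5) -> dict:
    """Build an OpenSearch/Elasticsearch completion-suggester request body.

    `field` must be mapped with type `completion`; the name used here is
    a hypothetical example, not a fixed convention.
    """
    return {
        "suggest": {
            "autocomplete": {               # arbitrary suggestion name
                "prefix": prefix,           # what the user has typed so far
                "completion": {"field": field, "size": size},
            }
        }
    }

body = autosuggest_query("runn")
print(json.dumps(body, indent=2))
```

The dict would then be POSTed to the index's `_search` endpoint (e.g. via the `opensearch-py` client); the response carries the matching suggestions under `suggest.autocomplete`.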

Posted 3 weeks ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site


Required skills:
- Strong knowledge of Splunk architecture, components, and deployment models (standalone, distributed, or clustered)
- Hands-on experience with Splunk forwarders, search processing, and index clustering
- Proficiency in writing SPL (Search Processing Language) queries and creating dashboards
- Familiarity with Linux/Unix systems and basic scripting (e.g., Bash, Python)
- Understanding of networking concepts and protocols (TCP/IP, syslog)

We are looking for a Splunk architect to join our dynamic team. In this hybrid role, you will leverage your expertise in Python programming to develop innovative solutions while harnessing the power of Splunk for data analysis, monitoring, and automation. This position is ideal for a problem-solver passionate about integrating programming with operational intelligence tools to drive efficiency and insights across the organization.

Key Responsibilities
- Deploy Splunk Enterprise or Splunk Cloud on servers or virtual environments.
- Configure indexing and search head clusters for data collection and search functionalities.
- Deploy universal or heavy forwarders to collect data from various sources and send it to the Splunk environment.
- Configure data inputs (e.g., syslog, SNMP, file monitoring) and outputs (e.g., storage, dashboards).
- Identify and onboard data sources such as logs, metrics, and events.
- Use regular expressions or predefined methods to extract fields from raw data.
- Configure props.conf and transforms.conf for data parsing and enrichment.
- Create and manage indexes to organize and control data storage.
- Configure roles and users with appropriate permissions using role-based access control (RBAC).
- Integrate Splunk with external authentication systems like LDAP, SAML, or Active Directory.
- Monitor user activities and changes to the Splunk environment.
- Optimize Splunk for better search performance and resource utilization.
- Regularly monitor the status of indexers, search heads, and forwarders.
- Configure backups for configurations and indexed data.
- Diagnose and resolve issues like data ingestion failures, search slowness, or system errors.
- Install and manage apps and add-ons from Splunkbase or custom-built solutions.
- Create Python scripts for automation and advanced data processing.
- Use KV stores for dynamic data storage and retrieval within Splunk.
- Plan and execute Splunk version upgrades.
- Regularly update apps and add-ons to maintain compatibility and security.
- Ensure the underlying operating system and dependencies are up to date.
- Integrate Splunk with ITSM tools (e.g., ServiceNow), monitoring tools, or CI/CD pipelines.
- Use Splunk's REST API for automation and custom integrations.

Good to have: Splunk Core Certified Admin certification.

Splunk Development and Administration
- Build and optimize complex SPL (Search Processing Language) queries for dashboards, reports, and alerts.
- Develop and manage Splunk apps and add-ons, including custom Python scripts for data ingestion and enrichment.
- Onboard and validate data sources in Splunk, ensuring proper parsing, indexing, and field extractions.

Integration and Automation
- Leverage Python to automate Splunk administrative tasks such as monitoring, data onboarding, and alerting.
- Integrate Splunk with third-party tools, systems, and APIs (e.g., ServiceNow, cloud platforms, or in-house solutions).
- Develop custom connectors to stream data between Splunk and other platforms or databases.

Data Analysis and Insights
- Collaborate with stakeholders to extract actionable insights from log data and metrics using Splunk.
- Create advanced visualizations and dashboards to highlight key trends and anomalies.
- Assist in root cause analysis for performance bottlenecks or operational incidents.

System Optimization and Security
- Enhance Splunk search performance through Python-driven optimizations and configurations.
- Implement security best practices in both Python code and Splunk setups, ensuring compliance with regulatory standards.
- Perform regular Splunk system health checks and troubleshoot issues related to data ingestion or indexing.

Collaboration and Mentoring
- Work closely with DevOps, Security, and Data teams to align Splunk solutions with business needs.
- Mentor junior developers or administrators in Python and Splunk best practices.
- Document processes, solutions, and configurations for future reference.

Python Development
- Proficient in Python 3.x, with experience in libraries such as Pandas, NumPy, Flask/Django, and Requests.
- Strong understanding of RESTful APIs and data serialization formats (JSON, XML).
- Experience with version control systems like Git.
- Design, develop, and maintain robust Python scripts, applications, and APIs to support automation, data processing, and integration workflows.
- Create reusable modules and libraries to simplify recurring tasks and enhance scalability.
- Debug, optimize, and document Python code to ensure high performance and maintainability.

Splunk Expertise
- Hands-on experience in Splunk development, administration, and data onboarding.
- Proficiency in SPL for creating advanced searches, dashboards, and alerts.
- Familiarity with props.conf and transforms.conf configurations.

Other Skills
- Knowledge of Linux/Unix environments, including scripting (Bash/PowerShell).
- Understanding of networking protocols (TCP/IP, syslog) and log management concepts.
- Experience with cloud platforms (AWS, Azure, or GCP) and integrating Splunk in hybrid environments.
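The field extraction that a props.conf `EXTRACT-` stanza performs is essentially named-group regex matching against the raw event. A Python sketch of the same idea on a made-up syslog-style line (the sourcetype and field names in the comment are hypothetical):

```python
import re

# The equivalent extraction could live in a props.conf stanza, e.g.
#   [my_sourcetype]
#   EXTRACT-auth = Failed password for (?<user>\w+) from (?<src_ip>\d+\.\d+\.\d+\.\d+)
# (sourcetype and field names here are made up for illustration).
AUTH_RE = re.compile(
    r"Failed password for (?P<user>\w+) from (?P<src_ip>\d{1,3}(?:\.\d{1,3}){3})"
)

def extract_fields(event: str) -> dict:
    """Return the named capture groups as Splunk-style fields, or {} if
    the event does not match."""
    m = AUTH_RE.search(event)
    return m.groupdict() if m else {}

event = "Jun  1 10:02:11 host sshd[123]: Failed password for root from 203.0.113.7 port 4242"
print(extract_fields(event))
```

Prototyping extractions this way in plain Python before committing them to props.conf/transforms.conf makes the regexes easy to unit-test outside Splunk.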

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


We are looking for a passionate and skilled iOS Developer with 3+ years of hands-on experience to join our dynamic team. The ideal candidate should have a strong foundation in Swift, Objective-C, and iOS frameworks, and be eager to contribute to building high-performance mobile applications with a great user experience.

Key Responsibilities
- Design, develop, and maintain iOS applications using Swift and Objective-C.
- Implement features using iOS frameworks like UIKit, Core Data, and Core Animation.
- Integrate RESTful APIs and handle JSON data parsing and formatting.
- Follow Apple's Human Interface Guidelines to build intuitive UI/UX.
- Use Git for version control and collaborate with other developers.
- Debug, troubleshoot, and resolve technical issues efficiently.
- Participate in Agile development sprints and team planning sessions.
- Learn and adopt modern tools like SwiftUI and Combine.
- Ensure secure coding practices and app compliance with mobile security standards.
- Use analytics platforms such as Firebase or Google Analytics for tracking app performance.
- Stay updated on iOS development best practices and trends.

Qualifications & Skills
- 3+ years of professional experience in iOS development.
- Proficiency in Swift and working knowledge of Objective-C.
- Strong understanding of key iOS frameworks (UIKit, Core Data, etc.).
- Experience in consuming RESTful APIs and working with JSON.
- Familiarity with Apple's Human Interface Guidelines.
- Proficient with Git and source control best practices.
- Solid debugging and problem-solving skills.
- Exposure to SwiftUI and Combine (preferred but not required).
- Basic understanding of mobile app security standards.
- Familiarity with CI/CD pipelines (a plus).
- Bonus: experience or interest in working with video streaming or DRM technologies.
- Strong communication skills and a team-oriented mindset.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Desired Competencies (Technical/Behavioral Competency)
Experience Range: 6-9 years
Hiring Location: Chennai/Bengaluru

Must-Have

Job Summary: We are seeking a skilled and detail-oriented Splunk Administrator to join our IT team. This role is critical to the success of our newly implemented Splunk monitoring environment. The ideal candidate will be responsible for onboarding new applications into Splunk, ensuring log data is correctly ingested, indexed and visualized in alignment with operational and security requirements.

Key responsibilities:
- Work closely with application owners, developers, and infrastructure teams to understand logging requirements.
- Implement log ingestion pipelines using the defined FCB process.
- Develop and configure data inputs, parsing, field extractions and source types.
- Onboard applications using universal forwarders, syslog or other defined ingestion methods.
- Ensure compliance with data onboarding standards and naming conventions.
- Update and maintain technical documentation related to onboarding procedures and data sources.
- Configure indexes, inputs, and props/transforms as needed for new data sources.
- Monitor data ingestion health and troubleshoot onboarding issues.
- Collaborate with the Splunk engineering and security teams to optimize data usage and performance.
- Assist in building and deploying dashboards, alerts and reports to support operational visibility.
- Perform regular health checks of the application and report any discrepancies.

Required skills:
- Relevant experience in Splunk administration in a mid-to-large enterprise environment.
- Strong knowledge of log formats, ingestion techniques and Splunk configuration files (inputs.conf, props.conf, transforms.conf, etc.).
- Experience in onboarding applications using forwarders and syslog.
- Scripting skills (Python, PowerShell).
- Good understanding of networking concepts and log sources such as firewalls, operating systems, middleware and cloud services.
- Ability to work independently and in a cross-functional team environment.
- Excellent documentation, communication and troubleshooting skills.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of Generative AI.

Key responsibilities include
- Research, experiment and build proofs of concept advancing the state of the art in AI & ML for GenAI.
- Collaborate with cross-functional teams to architect and execute technically rigorous AI projects.
- Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines.
- Engage in effective technical communication (written & spoken) with coordination across teams.
- Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility.
- Publish research papers in internal and external venues of repute.
- Support on-call activities for critical issues.

Basic qualifications
- Master's or PhD in computer science, statistics or a related field
- 2-7 years of experience in deep learning, machine learning, and data science
- Proficiency in coding and software development, with a strong focus on machine learning frameworks
- Experience in Python or another language; command-line usage; familiarity with Linux and AWS ecosystems
- Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc.
- Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting
- Papers published in AI/ML venues of repute

Preferred qualifications
- Track record of diving into data to discover hidden patterns and conducting error/deviation analysis
- Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, and ability to accurately determine cause-and-effect relations
- The motivation to achieve results in a fast-paced environment
- Exceptional level of organization and strong attention to detail
- Comfortable working in a fast-paced, highly collaborative, dynamic work environment

Basic qualifications
- 3+ years of experience building models for business applications
- Patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python or a related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
- Knowledge of standard speech and machine learning techniques

Preferred qualifications
- Experience using Unix/Linux
- Experience in professional software development

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2991773

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


This role is eligible for our hybrid work model: Two days in-office. Our Technology team is the backbone of our company: constantly creating, testing, learning and iterating to better meet the needs of our customers. If you thrive in a fast-paced, ideas-led environment, you’re in the right place. Why This Job’s a Big Deal Join an Agile team of professionals that are instrumental in building the next generation of travel applications. We constantly explore new technologies and engineer better solutions for ever demanding business needs. Our team of engineers, at all levels, work with the business leaders in defining the product roadmap and come up with innovative solutions to grow the future of travel. We design and develop our back end systems and REST APIs that serve hundreds of millions of searches a day, collecting and parsing data across thousands of partners to get the best deals for our customers. In This Role You Will Get To Participate in mission critical projects with direct impact on the evolution of Priceline's business. Be part of a cross-functional agile team that continuously experiments, iterates and delivers on new product objectives. Showcase your development skills of Core Java or similar programming languages. Apply your programming skills towards building low latency and high throughput transactional services with continuous integration and automation testing. Implement SQL composition skills that collects and queries data for investigation and analysis in real time from our applications. Utilize your knowledge to understand our codebase, systems and business requirements to effectively make changes to our applications. Effectively collaborate and engage in team efforts, speak up for what you think are the best solutions and be able to converse respectfully and compromise when necessary. Who You Are Bachelor’s degree or higher in Computer Science or related field. 3+ years of experience in software engineering and development. 
- Strong coding experience with Core Java.
- Thorough SQL composition skills for composing queries and analysis.
- Comfort and experience with Spring Boot and REST APIs.
- Experience with microservices is a must.
- Experience with developing on the cloud, especially GCP, AWS, or Azure.
- An illustrated history of living the values necessary to Priceline: Customer, Innovation, Team, Accountability and Trust. The Right Results, the Right Way is not just a motto at Priceline; it's a way of life. Unquestionable integrity and ethics are essential.

Who We Are

WE ARE PRICELINE. Our success as one of the biggest players in online travel is all thanks to our incredible, dedicated team of talented employees. Priceliners are focused on being the best travel deal makers in the world, motivated by our passion to help everyone experience the moments that matter most in their lives. Whether it's a dream vacation, your cousin's graduation, or your best friend's wedding, we make travel affordable and accessible to our customers.

Our culture is unique and inspiring (that's what our employees tell us). We're a grown-up startup: we deliver the excitement of a new venture without the struggles and chaos that can come with a business that hasn't stabilized. We're on the cutting edge of innovative technologies. We keep the customer at the center of all that we do. Our ability to meet their needs relies on the strength of a workforce as diverse as the customers we serve. We bring together employees from all walks of life, and we are proud to provide the kind of inclusive environment that stimulates innovation, creativity and collaboration.

Priceline is part of the Booking Holdings, Inc. (Nasdaq: BKNG) family of companies, a highly profitable global online travel company with a market capitalization of over $80 billion. Our sister companies include Booking.com, BookingGo, Agoda, Kayak and OpenTable. If you want to be part of something truly special, check us out!
Flexible work at Priceline

Priceline is following a hybrid working model, which includes two days onsite as determined by you and your manager (ideally selecting among Tuesday, Wednesday, or Thursday). On the remaining days, you can choose to be remote or in the office.

Diversity and Inclusion are a Big Deal!

To be the best travel dealmakers in the world, it's important we have a workforce that reflects the diverse customers and communities we serve. We are committed to cultivating a culture where all employees have the freedom to bring their individual perspectives, life experiences, and passion to work. Priceline is a proud equal opportunity employer. We embrace and celebrate the unique lenses through which our employees see the world. We'd love you to join us and add to our rich mix!

Applying for this position

We're excited that you are interested in a career with us. For all current employees, please use the internal portal to find jobs and apply. External candidates are required to have an account before applying. When you click Apply, returning candidates can log in, or new candidates can quickly create an account to save/view applications.
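The SQL composition skills this posting describes can be illustrated with a toy sketch. The posting's stack is Java/Spring Boot, but the idea is language-neutral; this version uses Python's standard-library sqlite3, and the schema, table, and data are all invented for illustration:

```python
import sqlite3

# Toy illustration of composing a query over search data collected from
# partners. The "searches" table and its contents are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (partner TEXT, city TEXT, price REAL)")
conn.executemany(
    "INSERT INTO searches VALUES (?, ?, ?)",
    [("p1", "NYC", 120.0), ("p2", "NYC", 95.5), ("p1", "LON", 80.0)],
)

# Compose a query: cheapest offer per city across all partners.
rows = conn.execute(
    "SELECT city, MIN(price) FROM searches GROUP BY city ORDER BY city"
).fetchall()
print(rows)  # [('LON', 80.0), ('NYC', 95.5)]
```

In a production service the same query shape would run against a shared relational database rather than an in-memory store.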

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description

EU INTech Partner Growth Experience (PGX) is seeking an Applied Scientist to lead the development of machine learning solutions for the EU Consumer Electronics business. In this role, you will push the boundaries of advanced ML techniques and collaborate closely with product and engineering teams to create innovative buying and forecasting solutions for the business. These new models will primarily benefit the Smart Retail project, which aims to revolutionize CPFR (Collaborative Planning, Forecasting, and Replenishment) retail operations, driving automation, enhancing decision-making processes, and achieving scale across eligible categories such as PC, Home Entertainment, and Wireless. The Smart Retail solution is composed of an internal interface automating selection management mechanisms currently performed manually, followed by a vendor-facing interface on Vendor Central reducing time spent collecting required inputs. The project's key functionalities include (i) a ranging model operating from category down to product-attribute level, pre-ASIN creation, when selection is substitutable; (ii) an advanced forecasting model designed for new selection and accounting for cannibalization; (iii) ordering-input optimization in line with SCOT guideline compliance; and intelligent inventory management for sell-through tracking. Smart Retail's success also depends on its integration with existing systems (SCOT) to minimize manual intervention and increase accuracy.

Key job responsibilities

- Design, develop, and deploy advanced machine learning models to address complex, real-world challenges at scale.
- Build new forecasting and time-series models or enhance existing methods using scalable techniques.
- Partner with cross-functional teams, including product managers and engineers, to identify impactful opportunities and deliver science-driven solutions.
- Develop and optimize scalable ML solutions, ensuring seamless production integration and measurable impact on business metrics.
- Continuously enhance model performance through retraining, parameter tuning, and architecture improvements using Amazon's extensive data resources.
- Lead initiatives, mentor junior scientists and engineers, and promote the adoption of ML methodologies across teams.
- Stay abreast of advancements in ML research, contribute to top-tier publications, and actively engage with the scientific community.

Basic Qualifications

- PhD, or Master's degree and 3+ years of CS, CE, ML or related field experience
- 3+ years of building models for business application experience
- Experience programming in Java, C++, Python or related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Preferred Qualifications

- Experience in patents or publications at top-tier peer-reviewed conferences or journals
- 3+ years of hands-on predictive modeling and large data analysis experience
- Experience working with large-scale distributed systems such as Spark, SageMaker or similar frameworks

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka - A66
Job ID: A2873880
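As a hedged illustration of the forecasting work this posting describes, here is the kind of naive baseline an advanced demand-forecasting model would typically be benchmarked against. This is not the posting's actual model; the series and window size are invented:

```python
# Trailing moving-average baseline: forecast the next period's demand as the
# mean of the most recent `window` observations.
def moving_average_forecast(history, window=3):
    """Return the mean of the last `window` values in `history`."""
    if not history:
        raise ValueError("history must be non-empty")
    window = min(window, len(history))
    recent = history[-window:]
    return sum(recent) / len(recent)

# Invented weekly unit sales for one product.
demand = [100, 120, 110, 130, 125]
forecast = moving_average_forecast(demand)  # mean of [110, 130, 125]
print(round(forecast, 2))
```

A new-selection forecaster with cannibalization effects, as described above, would go far beyond this, but beating such a baseline is the usual first bar.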

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description

Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions for providing better value to Amazon's customers? Do you want to build and deploy advanced ML systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real-world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit to join the Machine Learning team for India Consumer Businesses.

Machine learning, big data and related quantitative sciences have been strategic to Amazon from the early years. Amazon has been a pioneer in areas such as recommendation engines, ecommerce fraud detection and large-scale optimization of fulfillment center operations. As Amazon has rapidly grown and diversified, the opportunity for applying machine learning has exploded. We have a very broad collection of practical problems where machine learning systems can dramatically improve the customer experience, reduce cost, and drive speed and automation. These include product bundle recommendations for millions of products, safeguarding financial transactions by building risk models, improving catalog quality by extracting product attribute values from structured and unstructured data for millions of products, and enhancing address quality by powering customer suggestions.

We are developing state-of-the-art machine learning solutions to accelerate the Amazon India growth story. Amazon India is an exciting place to be for a machine learning practitioner. We have the eagerness of a fresh startup to absorb machine learning solutions, and the scale of a mature firm to help support their development at the same time.
As part of the India Machine Learning team, you will get to work alongside brilliant minds motivated to solve real-world machine learning problems that make a difference to millions of our customers. We encourage thought leadership and blue-ocean thinking in ML.

Key job responsibilities

- Use machine learning and analytical techniques to create scalable solutions for business problems
- Analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
- Design, develop, evaluate and deploy innovative and highly scalable ML models
- Work closely with software engineering teams to drive real-time model implementations
- Work closely with business partners to identify problems and propose machine learning solutions
- Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance
- Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
- Lead projects and mentor other scientists and engineers in the use of ML techniques

About The Team

The International Machine Learning Team is responsible for building novel ML solutions that attack India-first (and other emerging-market, across MENA and LatAm) problems and impact the bottom line and top line of the India business.
Learn more about our team from https://www.amazon.science/working-at-amazon/how-rajeev-rastogis-machine-learning-team-in-india-develops-innovations-for-customers-worldwide

Basic Qualifications

- 5+ years of building models for business application experience
- PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience
- Experience in patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python or related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Preferred Qualifications

- Experience using Unix/Linux
- Experience in professional software development

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2759531

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description

The Amazon Last Mile Geospatial team builds systems that model the real world to enable routing for drivers. We build, maintain and vend base map data, road network data, map tiles, geocodes of addresses, and time estimates for service as well as transit times. We also provide a shortest-path service to find the fastest paths between locations, and a service to optimize consolidation of stops. Together, these systems help us get better at determining the locations that we go to deliver packages, figure out how to get to those locations, and estimate the effort of delivery for planning.

While it may be easy to say "Why build yet another Maps?" as a first reaction, as we go deeper into our problems, the answer becomes increasingly clear and challenging. We are building systems that enable depth-focused solutions. For example, we are interested in not only getting a person to an address like 300 Boren Ave N; we are also interested in helping them find out if there is a mailing room in the building and, if there is, helping them navigate quickly to that mailing room. We are also interested in accurately estimating how long it would take to arrive at the address, find the mailing room and drop a package there. We will incorporate the ability to leverage mass transit, multiple modes of transportation and traffic awareness to find the most efficient paths for our drivers. We are also interested in making it easy to calculate paths on cheap mobile devices, and in simplifying the process of finding an efficient path to cover hundreds of delivery points. Several of these problems require us to build systems that can work with an ensemble of models, as well as support the right segmentation of inputs to make good estimates on the outputs.
There are several unsolved or partially solved problems in this space, such as automatically adding new roads detected from sensor/video data into the larger road graph; deterministically detecting whether a new road is in fact just a modification to an existing road (such as a change in curvature due to a new sidewalk); accurately determining the bearing of a person when they start traveling, leveraging only a single IMU sensor source; parsing unstructured addresses, such as in countries like India; processing alternate solutions within microseconds on a mobile device without talking to a backend service; and so on. The right person for this space would enjoy working in an area that requires constantly pushing both the research and technology boundaries to unlock solutions to such problems.

Our key output metrics include location accuracy, coverage and accuracy of our road network for routing users to the correct location, and predictive accuracy of service and transit estimates. We also measure the operational impact of these inputs on delivery success and on the gaps between actual versus planned on-zone times, transit times and service times. If you have an entrepreneurial spirit, know how to deliver, are deeply technical, highly innovative and long for the opportunity to build pioneering solutions to challenging problems, we want to talk to you.

#lastmile #maps_intelligence #sensor_intelligence

Key job responsibilities

- Participate in the design, implementation, and deployment of successful large-scale systems and services in support of our fulfillment operations and the businesses they support.
- Participate in the definition of secure, scalable, and low-latency services and efficient physical processes.
- Work in expert cross-functional teams delivering on demanding projects.
- Functionally decompose complex problems into simple, straightforward solutions.
- Understand system interdependencies and limitations.
- Share knowledge in performance, scalability, enterprise system architecture, and engineering best practices.

Basic Qualifications

- Bachelor's degree in computer science or equivalent
- 2+ years of non-internship professional software development experience
- 2+ years of programming using a modern programming language such as Java, C++, or C#, including object-oriented design experience
- 1+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience

Preferred Qualifications

- Experience building complex software systems that have been successfully delivered to customers
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence
- Experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2895144
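The "shortest path service" this posting describes is classically built on Dijkstra's algorithm over a weighted road graph. A minimal sketch follows; the road graph, node names, and weights (standing in for transit times) are invented for illustration:

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra's algorithm: minimum total edge weight from src to dst.

    `graph` maps a node to a list of (neighbor, weight) pairs.
    Returns float('inf') if dst is unreachable.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]  # (distance-so-far, node) min-heap
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")

# Invented road graph: edge weights are transit-time estimates in minutes.
roads = {
    "depot": [("a", 4.0), ("b", 1.0)],
    "b": [("a", 2.0)],
    "a": [("drop", 1.0)],
}
print(shortest_time(roads, "depot", "drop"))  # 1 + 2 + 1 = 4.0
```

A production routing service layers traffic awareness, turn restrictions, and multiple transport modes on top of this core idea.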

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Job description

Pranatree LLC is seeking an experienced LLM Scientist to join our team, focusing on the development, fine-tuning, and deployment of cutting-edge Large Language Models (LLMs) and Generative AI solutions. This role requires a strong foundation in data science, AI, and software engineering, with a particular emphasis on building innovative solutions for complex, unstructured problems. The position is remote and involves working on high-impact projects that push the boundaries of what LLMs can achieve.

Responsibilities

- LLM Fine-Tuning and Development: Research, fine-tune, and deploy LLMs, optimizing their capabilities to solve diverse AI problems.
- Generative AI Solutions: Leverage advanced AI and machine learning techniques to build prototypes and production-grade solutions for various use cases.
- RAG Pipelines: Design and implement Retrieval-Augmented Generation pipelines to improve the accuracy and relevance of AI outputs.
- Prototyping: Build and demonstrate working prototypes using frameworks such as Streamlit or Dash.
- Algorithm and Data Structures Expertise: Apply strong knowledge of algorithms and data structures to parse and process unstructured data effectively.
- System Performance and Debugging: Profile, debug, and optimize machine learning systems for scalability and performance in large-scale environments.
- Collaboration and Problem Solving: Work collaboratively with data scientists, engineers, and cross-functional teams to solve unstructured, complex problems using AI.
- Communication: Communicate technical concepts and project updates clearly and effectively to stakeholders.

Requirements

- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Minimum of 5 years of experience in software development, including relevant work in AI and machine learning.

Technical Skills

- Programming Proficiency: Strong coding skills in Python, Java, or C++.
- Data Science and AI: Solid foundation in data science and experience with Generative AI.
- LLM Expertise: Experience with fine-tuning large language models and familiarity with RAG pipelines.
- Frameworks for Prototyping: Proficiency in building prototypes using Streamlit, Dash, or similar app frameworks.
- Algorithms and Data Structures: Strong background in algorithms, data structures, and parsing unstructured data.
- System Performance: Experience with profiling, debugging, and ensuring the scalability of machine learning systems.
- Preferred: Hands-on experience with Generative AI and fine-tuning LLMs.

Soft Skills

- Strong communication skills to convey complex technical ideas.
- Problem-solving ability, especially in unstructured and ambiguous scenarios.

Performance Expectations

- Develop and fine-tune LLMs for various applications, ensuring high performance and reliability.
- Design and deploy prototypes that showcase practical uses of LLMs and Generative AI.
- Collaborate effectively within a multidisciplinary team to develop innovative AI-driven solutions.
- Ensure scalable and high-performance AI systems through effective profiling and debugging practices.
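The retrieval step of the RAG pipelines this posting mentions can be sketched with a toy lexical retriever. A real pipeline would use vector embeddings and an actual LLM for generation; here the documents, query, and the stubbed generator are all invented, and retrieval is just token overlap:

```python
# Toy Retrieval-Augmented Generation sketch: pick the document with the most
# query-token overlap, then hand it to a generator (stubbed here).
def retrieve(query, documents):
    """Return the document sharing the most whitespace tokens with the query."""
    q = set(query.lower().split())
    return max(documents, key=lambda doc: len(q & set(doc.lower().split())))

def answer(query, documents):
    context = retrieve(query, documents)
    # Stub: a production system would prompt an LLM with `context` + `query`.
    return f"Based on: {context}"

docs = [
    "Refund requests are processed within five days.",
    "Shipping is free for orders over fifty dollars.",
]
print(answer("how long do refund requests take", docs))
```

Swapping the overlap score for cosine similarity over embeddings, and the stub for an LLM call, turns this skeleton into the standard RAG shape.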

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


About This Role

Required Experience: 4-7 years of experience in financial services, with the following capabilities:

- Analytical approach
- Basic or intermediate coding and programming skills
- Verbal and written communication
- Critical thinking
- Multitasking and time management

Purpose And Scope

Every day, Client Success Specialists tackle the hardest, most sophisticated problems in FinTech. We apply our in-depth understanding of Aladdin, our clients' businesses, and the investment management process to provide world-class service to our growing, global client base. Our team members come from different majors and bring diverse skills and experiences to the table, but we share a serious passion for solving tough problems and keeping our clients happy. Our team is known for being industry experts with a reputation for getting the job done. As a team of 170+ strong globally, we:

- Deliver outstanding client service to users, every time
- Solve difficult problems by providing innovative solutions
- Collaborate with others because we know we can do more together
- Learn every day, question everything, and embrace change
- Foster a fun, innovative team environment

Key Responsibilities

Provide hands-on service to empower our clients to run their businesses on Aladdin: You will have direct, daily interactions with industry practitioners at respected investment institutions. You will solve problems that matter, making a direct and measurable difference to our clients. In the process, you will hone technical, industry, and relationship skills.

Use technology to solve problems: We can teach the skills you need to succeed, such as SQL and UNIX, for maneuvering relational databases and parsing product logs. You will apply these skills to help client and product teams make Aladdin better.

Educate users, demonstrate service insights, and relay user feedback to improve the client experience and our product: We believe that the best client service is proactive, not reactive.
We are students of our own data and engage with our clients and engineers to keep problems from arising, in addition to handling issues that are brought to our attention.

Work on a global team, with a local presence: Our support model follows the sun – if a market is open somewhere in the world, so are we. You will get to work with teams across the world, while engaging with a vibrant local team.

Our Benefits

To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model

BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock

At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
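The "parsing product logs" skill this posting mentions often amounts to pulling structured fields out of semi-structured text. A hedged sketch follows; the log format, field names, and threshold are invented, not Aladdin's actual logs:

```python
import re

# Extract a level, operation name, and latency from invented log lines,
# then flag slow operations.
LOG = re.compile(r"^(?P<level>\w+)\s+(?P<op>\S+)\s+latency=(?P<ms>\d+)ms$")

lines = [
    "INFO load_portfolio latency=120ms",
    "ERROR price_feed latency=5000ms",
]
records = [LOG.match(line).groupdict() for line in lines]
slow = [r["op"] for r in records if int(r["ms"]) > 1000]
print(slow)  # ['price_feed']
```

On a UNIX host the same filter is often a one-liner with grep/awk; the regex-with-named-groups version scales better once fields need post-processing.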

Posted 3 weeks ago

Apply

200.0 years

0 Lacs

Delhi, India

On-site


Job Description

Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity for you to work in our team to partner with the business to provide a comprehensive view. As an Analyst/Associate within the Markets team, you will be a self-starter and self-motivated individual, as the business is based on devising bespoke mandates. This role requires you to demonstrate a range of skills, from effective process oversight and controls and effective communication to analytical problem solving and teamwork.

Job Responsibilities

- Implementing and monitoring investment restrictions and guidelines; reporting breaches and maintaining records.
- Monitoring global exposure, liquidity, market and credit risk; validation of VaR models and other risk metrics.
- Oversight of daily tasks and processes, and investigating escalation alerts from portfolio managers, middle office, and compliance.
- Automating manual Excel-based monitoring/reporting processes to improve efficiencies across the Internal Control book of work.
- Continuing the migration from legacy Excel tools into a modern UI.
- Streamlining regulatory/client reporting.
- Leading and participating in ad-hoc projects as needed by senior management.
- Assisting with the preparation of management information for committees and management meetings.
Required Qualifications, Capabilities, And Skills

- Relevant experience in investment compliance or investment risk in asset management is crucial to the role.
- Experience in coding (preferably Python), specifically strong knowledge of data parsing and storage, as well as statistical, analytical, or machine learning libraries.
- Ability to translate business needs into quantitative analyses and tools, and to communicate complex results to senior stakeholders in a clear and precise manner.
- Able to work independently and collaboratively to problem solve, knowing when to escalate.
- Clearly document the code of the tools created; publish and maintain clear user documentation.
- Strong quantitative and analytical skills, strong communication skills (both written and verbal), and the ability to present findings to a non-technical audience including external stakeholders.

Preferred Qualifications, Capabilities, And Skills

- Strong knowledge of derivative markets (Equities, FX, Rates, or Commodities).
- Knowledge of UI libraries such as React would be an advantage; advanced user of the MS Office suite.
- Close attention to detail and ability to work to very high standards.

About Us

JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company.
We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team

J.P. Morgan's Commercial & Investment Bank is a global leader across banking, markets, securities services and payments. Corporations, governments and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk and extends liquidity in markets around the world.
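The guideline-monitoring responsibility this posting describes (implementing investment restrictions and reporting breaches) can be sketched as a toy concentration check. The limit, book, and issuer names are invented; a real system would read positions from a database and apply many rule types:

```python
# Flag issuers whose market value exceeds a per-issuer concentration limit,
# expressed as a fraction of the total book.
def breaches(positions, limit=0.10):
    """Return issuers whose weight in `positions` exceeds `limit`, sorted."""
    total = sum(positions.values())
    return sorted(
        issuer for issuer, mv in positions.items() if mv / total > limit
    )

# Invented book: issuer -> market value.
book = {"ACME": 30.0, "GLOBEX": 8.0, "INITECH": 62.0}
print(breaches(book))  # ['ACME', 'INITECH'] -- both exceed 10% of the book
```

Automating checks like this in Python, instead of a manual Excel tab, is exactly the migration the responsibilities list describes.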

Posted 3 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


The D. E. Shaw group is a global investment and technology development firm with more than $65 billion in investment capital as of December 1, 2024, and offices in North America, Europe, and Asia. Since our founding in 1988, our firm has earned an international reputation for successful investing based on innovation, careful risk management, and the quality and depth of our staff. We have a significant presence in the world's capital markets, investing in a wide range of companies and financial instruments in both developed and developing economies.

We are looking for resourceful and exceptional candidates for the Data Engineer role within our product development teams based out of Hyderabad. At DESIS, the Data Engineers develop web robots, or web spiders, that crawl through the web and retrieve data in the form of HTML, plain text, PDFs, Excel, and any other format, structured or unstructured. The engineer's job functions also include scraping website data into a structured format and building automated and custom reports on the downloaded data that are used as knowledge for business purposes. The team also works on automating end-to-end data pipelines.

WHAT YOU'LL DO DAY-TO-DAY:

As a member of the Data Engineering team, you will be responsible for various aspects of data extraction, such as understanding the data requirements of the business group; reverse-engineering the website, its technology, and the data retrieval process; re-engineering by developing web robots to automate the extraction of the data; and building monitoring systems to ensure the integrity and quality of the extracted data. You will also be responsible for managing changes to a website's dynamics and layout to ensure clean downloads, building scraping and parsing systems to transform raw data into a structured form, and offering operations support to ensure high availability and zero data losses.
Additionally, you will be involved in other tasks such as storing the extracted data in the recommended databases, building high-performing, scalable data extraction systems, and automating data pipelines.

WHO WE'RE LOOKING FOR:

Basic qualifications:

- 2-4 years of experience in website data extraction and scraping
- Good knowledge of relational databases, writing complex queries in SQL, and dealing with ETL operations on databases
- Proficiency in Python for performing operations on data
- Expertise in Python frameworks and libraries like Requests, urllib2, Selenium, Beautiful Soup, and Scrapy
- A good understanding of HTTP requests and responses, HTML, CSS, XML, JSON, and JavaScript
- Expertise with debugging tools in Chrome to reverse-engineer website dynamics
- A good academic background and accomplishments
- A BCA/MCA/BS/MS degree with a good foundation and practical application of knowledge in data structures and algorithms
- Problem-solving and analytical skills
- Good debugging skills

Interested candidates can apply through our website: https://www.deshawindia.com/recruit/jobs/Adv/Link/SnrMemDEFeb25

We encourage candidates with relevant experience looking to restart their careers after a break to apply for this position. Learn about Recommence, our gender-neutral return-to-work initiative.

The Firm offers excellent benefits, a casual, collegial working environment, and an attractive compensation package. For further information about our recruitment process, including how applicant data will be processed, please visit https://www.deshawindia.com/careers

Members of the D. E. Shaw group do not discriminate in employment matters on the basis of sex, race, colour, caste, creed, religion, pregnancy, national origin, age, military service eligibility, veteran status, sexual orientation, marital status, disability, or any other protected class.
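The scraping-and-parsing work this posting describes (turning raw HTML into structured data) can be sketched using only the standard library; the role itself lists Requests, Selenium, Beautiful Soup, and Scrapy, and the HTML snippet and class name below are invented:

```python
from html.parser import HTMLParser

# Pull the text of every <span class="price"> out of an HTML fragment.
class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

html = '<div><span class="price">42.50</span><span class="price">17.99</span></div>'
parser = PriceParser()
parser.feed(html)
print(parser.prices)  # ['42.50', '17.99']
```

In practice Beautiful Soup or Scrapy selectors replace this hand-rolled state machine, but the extraction logic (locate markup, strip, structure) is the same.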

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

Position: Research Analyst (Subject Matter Expert – BFSI)
Location: India (Remote)
Employment Type: Full-Time
Schedule: Monday to Friday, Day Shift
Experience: 5+ Years

Company Description

Scry AI is a leading innovator in AI-powered financial intelligence platforms tailored for Banking, Financial Services, and Insurance (BFSI) organizations. Our Collatio® Financial Spreading solution uses advanced AI to automate the digitization, normalization, and analysis of complex financial data from corporate filings, statements, and reports, supporting research, risk, and underwriting teams with faster, more accurate insights. We are seeking an experienced Research Analyst to join as a Subject Matter Expert (SME) and help drive the development of intelligent financial analysis tools and workflows for BFSI enterprises.

Role Overview

As a Research Analyst (SME) at Scry AI, you will use your expertise in financial research and corporate analysis to inform product development, support client engagements, and advance automation in financial statement analysis. You will work closely with product managers, data scientists, and client success teams to shape cutting-edge AI tools for research professionals and financial institutions.

Key Responsibilities

1. Subject Matter Expertise & Product Strategy
Provide domain expertise on financial analysis, research workflows, and reporting standards across BFSI sectors.
Collaborate with product and engineering teams to improve AI models for extracting and interpreting data from financial statements, regulatory filings, and disclosures.
Define financial metrics, ratios, and KPIs essential for research and credit evaluation.

2. Client & Market Engagement
Act as a liaison between users and development teams to ensure product-market fit and solution adoption.
Support solution engineering in product demos, RFPs, and pre-sales conversations with financial institutions.
Conduct training sessions, pilot engagements, and feedback loops with clients.
3. Analytics Automation & Best Practices
Contribute to developing automated workflows for financial spreading, trend analysis, benchmarking, and footnote parsing.
Ensure output quality by validating mappings, templates, and research models generated by the platform.
Recommend industry best practices for integrating AI-powered tools into traditional research workflows.

4. Thought Leadership & Competitive Research
Track competing financial research platforms to identify gaps and opportunities.
Co-develop content with marketing teams, including whitepapers, case studies, and knowledge sessions.
Represent Scry AI at BFSI research forums and digital transformation events.

Required Qualifications & Skills
5+ years of experience in financial research, credit analysis, risk research, or corporate financial analysis.
In-depth understanding of financial statements, sector-specific KPIs, and fundamental analysis.
Familiarity with research tools such as Capital IQ, FactSet, Bloomberg, or similar platforms.
Hands-on experience with financial data modeling, spreading, or analysis automation.
Strong communication and documentation skills for both business and technical audiences.

Our Ideal Candidate
Is a curious and analytical thinker passionate about improving financial research through AI.
Has the ability to work across teams and translate real-world financial problems into product use cases.
Is proactive, detail-oriented, and thrives in fast-paced environments.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the V Team Life.

What You'll Be Doing...

As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources. This includes bringing both structured and unstructured data into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams within Verizon.

Understanding the business requirements.
Transforming them into technical design.
Working on data ingestion, preparation and transformation.
Developing the scripts for data sourcing and parsing.
Developing data streaming applications.
Debugging the production failures and identifying the solution.
Working on ETL/ELT development.

What we're looking for...

You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solve business problems.

You'll Need To Have
A Bachelor's degree or one or more years of experience.
Experience with Data Warehouse concepts and the Data Management life cycle.

Even better if you have one or more of the following:
Any related certification as an ETL/ELT developer.
Accuracy and attention to detail.
Good problem solving, analytical, and research capabilities.
Good verbal and written communication.
Experience presenting to and influencing partners.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Locations
Hyderabad, India
Chennai, India
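The ingestion, preparation, and transformation loop described in this posting can be illustrated with a toy, standard-library-only Python sketch. The CSV source and the device/bytes schema are invented for the example, and a real pipeline would load into a warehouse or data lake rather than return a list:

```python
# Toy ETL sketch (illustrative only): extract rows from CSV text,
# transform them (type casts, filtering), and "load" the result.
import csv
import io

def run_etl(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))   # extract
    loaded = []
    for row in reader:
        row["bytes"] = int(row["bytes"])             # transform: cast types
        if row["bytes"] > 0:                         # transform: drop bad records
            loaded.append(row)                       # load (stand-in for a warehouse write)
    return loaded

raw = "device,bytes\nrouter1,1024\nrouter2,0\nrouter3,2048"
print(run_etl(raw))
```

The same extract/transform/load shape applies whether the source is a CSV drop, a streaming topic, or a database extract.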

Posted 3 weeks ago

Apply

4.5 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Job Summary

This position requires technical skills and an ability to resolve complex problems while working on time-boxed, multiple concurrent development projects.

Understand, suggest, and choose from possible technical solutions/tools to satisfy business needs: choices that reflect a solution balancing design, use, and support.
Use best practices/techniques for keeping the design/solution under control without excessive work or rework.
Peer technical review, including design, code review, and testing.
Ability to lead a technical team, when required.
Thorough understanding of OO concepts and their application in development.
Expertise in Core Java.
Hands-on with Spring 3.0+ (Spring Core, Spring MVC, Spring ORM).
Hands-on with JPA.
Hands-on with Spring AOP, Spring Security, AspectJ, Spring logging with Log4j, and JMS.
Hands-on with JSP, Servlets, JSON.
Hands-on with SQL Server.
Hands-on with REST services.
Must have worked on at least one application server, such as Tomcat, JBoss, Apache, GlassFish, or WebSphere.
Knowledge of XML parsing and/or DOM traversal.
Experience in Agile/Scrum software development.
Create logical data structures, code, and apply best practices/design patterns of coding.
Product/application development, and knowledge of the SDLC of multiple products/applications.
Responsible for and produces complete, quality deliverables.
Adherence and compliance to project/organization processes and standards.

Educational Qualification
Graduate, with Java/J2EE training from a good institute.

Skills
Goal-oriented with a results-driven desire for success.

Experience
4.5 - 6 years (relevant experience: 5 years)

Posted 3 weeks ago

Apply

5.5 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

BOLD is seeking the newest addition to our existing Data Science team. You'll combine your analytical skills and business knowledge to deliver key insights to our Product and Management teams, help drive product development using our vast user data sets to enhance our current products, and find inferences and relationships in this data to build new products.

Job Description

ABOUT THIS TEAM

The Data Science department at BOLD is responsible for discovering patterns and trends in datasets to get insights, creating predictive algorithms and data models, improving the quality of data or product offerings by utilizing machine learning techniques, distributing suggestions to other teams and top management, and using data tools such as Python and SQL. The Data Science team actively collaborates with other vertical teams such as Engineering, Portals, BI, Product, and Legal. Most of the projects are focused on problems that require a mix of natural language processing and machine learning. Some of the active projects are resume parsing, ranking, summary generation, data quality and scoring, content generation, job recommendations, and conversion analysis. Apart from the business initiatives, the team also explores state-of-the-art methods and keeps up to date with technology.

WHAT YOU'LL DO

Demonstrate ability to work on data science projects involving NLP, large language models, predictive modelling, statistical analysis, vector space modelling, machine learning, etc.
Leverage our rich data sets of user data to perform research, develop models, and create data products with our Development & Product teams.
Develop novel and scalable data systems, in cooperation with our system architects, that leverage datasets using machine learning techniques to enhance the user experience.
Collaborate effectively with cross-functional teams to deliver end-to-end products and features.
Demonstrate ability to multi-task and re-prioritize responsibilities based on changing requirements.
Estimate efforts, identify risks, and devise and meet project schedules.
Run review meetings effectively and drive the closure of all open issues on time.
Mentor/coach data scientists to facilitate their development and provide technical leadership to them.
Rise above detail to see broader issues and implications for the whole product/team.

WHAT YOU'LL NEED

Knowledge of and experience using statistical and machine learning algorithms, including regression, instance-based learning, decision trees, Bayesian statistics, clustering, neural networks, deep learning, and ensemble methods.
Expert knowledge of Python.
Experience in using large language models; fine-tuning knowledge is good to have.
Experience in feature selection and in building and optimising classifiers.
Experience working with backend technologies such as Flask/Gunicorn.
Experience working with open-source libraries such as spaCy, NLTK, Gensim, etc.
Experience working with deep-learning libraries such as TensorFlow, PyTorch, etc.
Experience with software stack components including common programming languages, back-end technologies, database modelling, continuous integration, service-oriented architecture, software testability, etc.
Be a keen learner, enthusiastic about developing software.
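A project like resume parsing typically begins by pulling structured fields out of free text. The following standard-library-only toy shows the idea; the regexes, skill vocabulary, and sample text are all illustrative assumptions, not BOLD's actual pipeline (which the posting suggests builds on spaCy, NLTK, and Gensim):

```python
# Toy resume parser: extract a few structured fields from free-form text.
# Patterns and the skill vocabulary are illustrative, not production-grade.
import re

SKILL_VOCAB = {"python", "sql", "flask", "pytorch", "nlp"}

def parse_resume(text):
    """Return email, phone, and matched skills from resume text."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s-]{8,}\d", text)
    # Naive skill extraction against a small controlled vocabulary.
    tokens = re.findall(r"[A-Za-z+#]+", text.lower())
    skills = sorted({t for t in tokens if t in SKILL_VOCAB})
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        "skills": skills,
    }

sample = "Jane Doe | jane@example.com | +91 98765 43210 | Skills: Python, SQL, NLP"
print(parse_resume(sample))
```

Production systems replace the regexes with trained NER models and the vocabulary with learned skill taxonomies, but the extract-then-normalize shape is the same.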
WHAT'S GOOD TO HAVE
Experience with Cloud infrastructure and platforms (Azure, AWS).
Experience using tools to deploy models into the production environment, such as Jenkins.

EXPERIENCE
Module Lead: 5.5+ years

BENEFITS

Outstanding Compensation
Competitive salary
Tax-friendly compensation structure
Bi-annual bonus
Annual appraisal
Equity in company

100% Full Health Benefits
Group Mediclaim, personal accident, & term life insurance
Group Mediclaim benefit (including parents' coverage)
Practo Plus health membership for employees and family
Personal accident and term life insurance coverage

Flexible Time Away
24 days paid leaves
Declared fixed holidays
Paternity and maternity leave
Compassionate and marriage leave
Covid leave (up to 7 days)

ADDITIONAL BENEFITS
Internet and home office reimbursement
In-office catered lunch, meals, and snacks
Certification policy
Cab pick-up and drop-off facility

About BOLD

We Transform Work Lives

As an established global organization, BOLD helps people find jobs. Our story is one of growth, success, and professional fulfillment. We create digital products that have empowered millions of people in 180 countries to build stronger resumes, cover letters, and CVs. The result of our work helps people interview confidently, finding the right job in less time. Our employees are experts, learners, contributors, and creatives.

We Celebrate And Promote Diversity And Inclusion

We value our position as an Equal Opportunity Employer. We hire based on qualifications, merit, and our business needs. We don't discriminate regarding race, color, religion, gender, pregnancy, national origin or citizenship, ancestry, age, physical or mental disability, veteran status, sexual orientation, gender identity or expression, marital status, genetic information, or any other applicable characteristic protected by law.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Description

Are you passionate about solving complex logistics challenges that directly impact millions of customers? Our Logistics Analytics team is at the forefront of revolutionizing delivery experiences through data-driven solutions and innovative technology. As an Applied Scientist, you will join a team dedicated to optimizing our delivery network, ensuring customers receive their packages reliably and efficiently.

We are seeking an enthusiastic, customer-centric professional with good analytical capabilities to drive impactful projects, implement advanced scheduling solutions, and develop scalable processes. In this role, you will have immediate ownership of business-critical challenges and the opportunity to make strategic, data-driven decisions that shape the future of last-mile delivery. Your work will directly influence customer experience and operational excellence. The ideal candidate will possess both research science capabilities and program management skills, thriving in an environment that requires independent decision-making and comfort with ambiguity. This role offers the opportunity to make a significant impact on one of the world's most sophisticated logistics networks while working with pioneering technology and data science applications.

Basic Qualifications
3+ years of experience building models for business applications
PhD, or Master's degree and 4+ years of experience in CS, CE, ML or a related field
Experience with patents or publications at top-tier peer-reviewed conferences or journals
Experience programming in Java, C++, Python or a related language
Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Preferred Qualifications
Experience using Unix/Linux
Experience in professional software development

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: ADCI - Karnataka - A66
Job ID: A2950397

Posted 3 weeks ago

Apply