5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Position: Lead Android Developer
Location: Jaipur

Khushi Baby, a nonprofit organization in India, serves as a technical partner to health departments. Established in 2016 from a Yale University classroom, it has grown into a 90-member team with offices in Jaipur, Udaipur, Delhi, and Bengaluru. Khushi Baby focuses on digital health solutions, health program strengthening, and R&D. Its flagship platform, the Community Health Integrated Platform (CHIP), supports over 70,000 community health workers across 40,000 villages, reaching 45 million beneficiaries. The platform has identified and monitored 5+ million high-risk individuals, with the Ministry of Health allocating ₹160 crore ($20M) for its scale-up. CHIP has enabled initiatives like Rajasthan's digital health census, TB case finding, vector-borne disease surveillance, labor room monitoring, and immunization drives, co-designed with extensive field input. In R&D, Khushi Baby advances community-level geospatial analysis and individual health diagnostics, including smartphone-based tools and low-literacy models. Programmatically, it focuses on maternal health, child malnutrition, and zero-dose children. Backed by donors like GAVI, the Skoll Foundation, and CSR funding, Khushi Baby partners with IITs, AIIMS Jodhpur, JPAL South Asia, MIT, Microsoft Research, WHO, and multiple state governments.

Khushi Baby seeks skilled, creative, and driven candidates eager to make a large-scale public health impact by joining its interdisciplinary team in policy, design, development, implementation, and data science.

What we require:
- A willingness to put our mission first and to go the last mile to ensure our solution creates impact
- 5+ years of professional experience developing Android applications
- More than 3 years of experience leading a team of developers across various projects
- Good exposure to Android Studio and the Android SDK, with Android tools, Kotlin, and frameworks
- Research and suggest new mobile products, applications, and protocols
- Work in close collaboration with back-end developers, designers, and the rest of the team to deliver well-architected, high-quality solutions
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
- Familiarity with industry-standard design patterns for the most commonly encountered situations is a must
- A solid understanding of operating system fundamentals such as processes, inter-process communication, multi-threading primitives, race conditions, and deadlocks
- Good knowledge of multithreading, process optimisation, and system resource planning in native Android
- Experience using web services and parsing data in JSON, XML, etc.
- Good knowledge of OO design, database design, data structures, and algorithms
- Experience working in an Agile team, familiarity with Agile best practices, and the ability to manage individual task deliverables
- A sense of user engagement: the ability to dig deep into real end users' needs and improve the product over time
- Work closely with developers, the backend lead, and product and project managers to meet project deadlines
What we prefer:
- Background in public health, ICT4D, and digital health standards frameworks
- Experience building offline-online capable apps
- Experience with facial biometrics, Near Field Communication, and edge analytics
- Currently live Android applications with over 1,000 downloads and a 4+ rating on the Play Store

Projects / Responsibilities:
- Community Health Integrated Platform for ASHAs, ANMs, and MOCs
- Khushi Baby Reproductive and Child Health Solution
- Decision Support Tool for Community Health Officers
- Health Worker Diligence and High-Risk Prediction module in collaboration with Google AI for Social Good
- IoT device integration, facial biometric module integration, and NFC device integration for decentralized health records; NDHM implementation
- Health and Wellness Center Digital Platform
- Ensuring end-to-end encryption, version control and backwards compatibility, automated testing, and systematic documentation
- Conducting field tests and analyzing automated user metrics to understand and improve the user interface

Remuneration:
The remuneration offered will range between 20-25 LPA, commensurate with the candidate's experience and skill sets. Other benefits include:
- Medical insurance
- Paid sick leave, paid parental leave, and menstrual leave
- Learning stipend policy
- A flexible, enabling workplace with the opportunity to grow into leadership roles
- Opportunities to attend and actively participate in prestigious international conferences and workshops

Note: The candidate will be on a probationary period for the first 90 days of the contract.

How to Apply:
To apply for the above position, share your CV at careers@khushibaby.org. Due to the high number of applicants, we will only reach out to those who are shortlisted. Rest assured that your application will be carefully reviewed, and if you are shortlisted, you will receive a call or email from us.
Posted 3 days ago
1.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Location: Indore/Chennai
Job Type: Full-time
Experience Required: 1+ years
Department: Technology / Mobile Development

Job Summary:
We are seeking a passionate and skilled Swift Developer with 1+ years of experience in iOS application development. The ideal candidate should have hands-on experience building mobile applications using Swift and a strong understanding of Apple’s ecosystem. You will collaborate with cross-functional teams to develop, test, and deliver high-performance iOS applications.

Key Responsibilities:
- Develop and maintain advanced iOS applications using Swift
- Collaborate with UX/UI designers, product managers, and backend developers
- Integrate APIs and third-party services into applications
- Write clean, scalable, and well-documented code
- Debug and resolve issues; improve performance and stability
- Keep up to date with the latest iOS trends, technologies, and best practices
- Participate in code reviews and contribute to technical discussions

Requirements:
- 1+ years of experience in Swift and iOS app development
- Strong knowledge of Xcode, UIKit, Core Data, and other iOS frameworks
- Familiarity with RESTful APIs, JSON parsing, and third-party libraries
- Good understanding of mobile UI/UX standards
- Experience with version control systems like Git
- Ability to write unit and UI tests to ensure robustness
- Strong analytical and problem-solving skills
- Bachelor’s degree in Computer Science, Engineering, or a related field
Posted 3 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Engineering Services Practitioner
Project Role Description: Assist with end-to-end engineering services to develop technical engineering solutions that solve problems and achieve business objectives, using scientific, socio-economic, and technical knowledge and practical experience. Work across structural and stress design, qualification, configuration, and technical management.
Must-have skills: 5G Wireless Networks & Technologies
Good-to-have skills: NA
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Job Title: 5G Core Network Ops Senior Engineer

Summary:
We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination.

Roles and Responsibilities:

1. Fault Handling & Troubleshooting:
- Provide Level 2 (L2) support for 5G Core SA network functions in a production environment.
- Nokia EDR operations and support: monitor and maintain the health of Nokia EDR systems.
- Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery.
- Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.).
- Validate EDR formats and schemas against 3GPP and Nokia specifications.
- NCOM platform operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform.
- Manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments.
- Analyze alarms from NetAct/Mantaray or external monitoring tools.
- Correlate events using NetScout, Mantaray, and PM/CM data.
- Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPsec, and handovers.
- Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload).
- Perform packet tracing (Wireshark) or core tracing (PCAP, logs), as well as Nokia PCMD trace capture and analysis.
- Perform root cause analysis (RCA) and implement corrective actions.
- Handle escalations from Tier-1 support and provide timely resolution.

2. Automation & Orchestration:
- Automate deployment, scaling, healing, and termination of network functions using NCOM.
- Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (Flux CD, Argo CD).
- Integrate NCOM with third-party systems using open APIs and custom plugins.

3. Session & Service Investigation:
- Trace subscriber issues (5G attach, PDU session, QoS).
- Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM).
- Correlate user-plane drops, abnormal releases, and bearer QoS mismatches.
- Work on preventive measures with the L1 team for health checks and backups.

4. Configuration and Change Management:
- Create MOPs for required changes; validate MOPs with Ops teams and stakeholders before rollout/implementation.
- Maintain detailed documentation of network configurations, incident reports, and operational procedures.
- Support software upgrades, patch management, and configuration changes.
- Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs).
- Audit NRF/PCF/UDM (and similar) configuration and databases.
- Validate policy rules, slicing parameters, and DNN/APN settings.
- Support integration of new 5G Core nodes and features into the live network.

5. Performance Monitoring:
- Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, and user-plane utilization.
- Proactively detect degrading KPI trends.

6. Security & Access Support:
- Provide application support for Nokia EDR and CrowdStrike.
- Assist with certificate renewals, firewall/NAT issues, and access failures.

7. Escalation & Coordination:
- Escalate unresolved issues to L3 teams, Nokia TAC, and OSS/Core engineering.
- Work with L3 and care teams on issue resolution.
- Ensure compliance with SLAs and contribute to continuous service improvement.

8. Reporting:
- Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance.

Technical Experience and Professional Attributes:
- 5–9 years of hands-on experience in the telecom industry.
- Mandatory experience with the Nokia 5G Core SA platform.
- Hands-on experience with Nokia EDR operations and support: monitoring and maintaining the health of Nokia EDR systems, performing log analysis, and troubleshooting issues related to EDR generation, parsing, and delivery.
- Experience with NCOM platform operations: operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform.
- NF deployment and troubleshooting experience: deployment, scaling, healing, and termination of network functions using NCOM.
- Solid understanding of 5G Core packet core network protocols and interfaces such as N1, N2, N3, N6, N7, and N8, as well as GTP-C/U and HTTPS, including the ability to trace and debug issues.
- Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning, and Flowone.
- In-depth understanding of 3GPP call flows for 5G SA and 5G NSA, call routing, number analysis, system configuration, data roaming, and configuration, plus knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI.
- Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based).
- Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces.
- Strong analytical and troubleshooting skills.
- Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing) and log management systems (e.g., Prometheus, Grafana).
- Knowledge of network protocols and security (TLS, IPsec).
- Excellent communication and documentation skills.

Educational Qualification: BE/BTech (15 years of full-time education)

Additional Information:
- Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes).
- Experience with Nokia 5G Core platforms, NCOM, NCS, Nokia private cloud, public cloud (AWS preferred), and cloud-native environments (Kubernetes, Docker, CI/CD pipelines).
- Cloud certifications (AWS) and/or experience on AWS Cloud.
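As a concrete illustration of the KPI monitoring described above, the sketch below computes two of the named KPIs (registration success rate and PDU session setup success) from raw performance counters. This is a hypothetical example: the counter names are invented for illustration and do not correspond to actual Nokia NetAct/NetScout counter fields.

```python
# Hypothetical sketch: deriving 5G Core KPIs from raw performance counters.
# Counter names are illustrative, not real NetAct/NetScout fields.

def success_rate(attempts: int, successes: int) -> float:
    """Return a success-rate percentage, guarding against division by zero."""
    if attempts == 0:
        return 0.0
    return round(100.0 * successes / attempts, 2)

# Example counter snapshot, e.g. aggregated over one 15-minute granularity period.
counters = {
    "amf_registration_attempts": 12_450,
    "amf_registration_successes": 12_311,
    "smf_pdu_session_setup_attempts": 8_920,
    "smf_pdu_session_setup_successes": 8_874,
}

reg_sr = success_rate(counters["amf_registration_attempts"],
                      counters["amf_registration_successes"])
pdu_sr = success_rate(counters["smf_pdu_session_setup_attempts"],
                      counters["smf_pdu_session_setup_successes"])

print(f"Registration success rate: {reg_sr}%")  # 98.88%
print(f"PDU session setup success: {pdu_sr}%")  # 99.48%
```

In practice such ratios would be computed per granularity period and trended over time, so that a degrading KPI (e.g., a falling registration success rate) can be flagged before it breaches an SLA threshold.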
Posted 3 days ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will: 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science; 2) work on the execution and implementation of key priorities to advance external and internal data-for-social strategies; and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and event organization.

Qualifications
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and machine learning concepts.
- Excellent communication skills and the ability to work across technical and non-technical teams.
Technical Skills & Tools

Data Wrangling & Processing
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
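The data-wrangling skills listed under Technical Skills & Tools (cleaning, normalization, regex-based parsing with pandas) can be sketched in a few lines. This is only an illustration: the survey-style dataset and its column names are invented, not from any Mastercard source.

```python
import pandas as pd

# Hypothetical sketch of the wrangling tasks listed above: currency strings
# cleaned with a regular expression, categorical labels normalized, and
# missing values imputed. All data here is invented for illustration.
raw = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "income": ["$1,200", "950", None, "$2,000"],
    "region": ["north ", "South", "NORTH", "south"],
})

clean = raw.copy()
# Strip currency symbols and thousands separators, then convert to numeric.
clean["income"] = (
    clean["income"]
    .str.replace(r"[$,]", "", regex=True)
    .astype(float)
)
# Normalize inconsistent categorical labels (whitespace, case).
clean["region"] = clean["region"].str.strip().str.lower()
# Impute the missing income with the median rather than dropping the row.
clean["income"] = clean["income"].fillna(clean["income"].median())

print(clean)
```

After this pass, `income` is a numeric column with no gaps and `region` has exactly two consistent categories, which is the kind of tidy input the visualization and modeling tools listed above expect.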
Posted 3 days ago
7.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Role: Data Quality (DQ) Specialist
Experience: 7-12 years of relevant professional experience in Data Quality and/or Data Governance, Data Management, or Data Lineage solution implementation.
Location: Pune
Notice Period: Immediate to a maximum of 15 days
Work Mode: 5 days WFO

JOB RESPONSIBILITIES:
The job entails working with our clients and partners to design, define, implement, roll out, and improve Data Quality solutions that leverage various tools available in the market, for example Informatica IDQ, SAP DQ, SAP MDG, Collibra DQ, Talend DQ, a custom DQ solution, and/or other leading platforms, for the client’s business benefit. The ideal candidate will be responsible for ensuring the accuracy, completeness, consistency, and reliability of data across systems. You will work closely with data engineers, analysts, and business stakeholders to define and implement data quality frameworks and tools.

As part of your role and responsibilities, you will get the opportunity to be involved in the entire business development life cycle:
- Meet with business individuals to gather information and analyze existing business processes, determine and document gaps and areas for improvement, and prepare requirements documents, functional design documents, etc. In summary, work with project stakeholders to identify business needs and gather requirements for Data Quality and/or Data Governance or Master Data.
- Follow up on the implementation by conducting training sessions and by planning and executing the technical and functional transition to the support team.
- Grasp business and technical concepts and transform them into creative, lean, and smart data management solutions.
- Develop and implement Data Quality solutions on any of the above leading platform-based Enterprise Data Management solutions.
- Assess and improve data quality across multiple systems and domains.
- Define and implement data quality rules, metrics, and dashboards.
- Perform data profiling, cleansing, and validation using industry-standard tools.
- Collaborate with data stewards and business units to resolve data issues.
- Develop and maintain data quality documentation and standards.
- Support data governance initiatives and master data management (MDM).
- Recommend and implement data quality tools and automation strategies.
- Conduct root cause analysis of data quality issues and propose remediation plans.
- Implement and take advantage of AI to improve and automate the Data Quality solution.
- Leveraging SAP MDG/ECC experience, deep dive into root cause analysis for assigned use cases; also work with Azure Data Lake (via Databricks) using SQL/Python.
- Identify and build data models (conceptual and physical) that provide an automated mechanism to monitor ongoing DQ issues. Multiple workshops may be needed to work through the options and identify the one that is most efficient and effective.
- Work with the business (Data Owners/Data Stewards) to profile data and expose patterns indicating data quality issues, and identify the impact on the specific CDEs deemed important by each individual business.
- Identify the financial impact of data quality issues, as well as the business benefit (quantitative/qualitative) of remediation, while managing implementation timelines.
- Schedule regular working groups with business units that have identified DQ issues and ensure progress on RCA/remediation or on presenting in DGFs.
- Identify business DQ rules from which KPIs/measures are stood up to feed dashboards/workflows for BAU monitoring; raise and investigate red flags.
- An understanding of the Data Quality value chain, starting with Critical Data Element concepts, Data Quality issues, and Data Quality KPIs/measures, is needed.
- Experience owning and executing Data Quality issue assessments to aid improvements to operational processes and BAU initiatives.
- Highlight risks and hidden DQ issues to the Lead/Manager for further guidance/escalation.
- Communication skills are important in this role, as it is outward-facing and the focus has to be on clearly articulating messages.
- Support the design, build, and deployment of data quality dashboards via Power BI.
- Determine escalation paths and construct workflows and alerts that notify process and data owners of unresolved data quality issues.
- Collaborate with IT and analytics teams to drive innovation (AI, ML, cognitive science, etc.).
- Work with business functions and projects to create data quality improvement plans.
- Set targets for data improvements/maturity; monitor and intervene when sufficient progress is not being made.
- Support initiatives driving data clean-up of the existing data landscape.

JOB REQUIREMENTS:
i. Education or Certifications:
- Bachelor's/Master's degree in engineering, technology, or other related degrees.
- Relevant professional-level certifications from Informatica, SAP, Collibra, Talend, or any other leading platform/tools.
- Relevant certifications from DAMA, EDM Council, and CMMI-DMM will be a bonus.
ii. Work Experience:
- You have 4-10 years of relevant experience within the Data & Analytics area, with major experience around data management areas: ideally in Data Quality (DQ) and/or Data Governance or Master Data, using relevant tools.
- You have in-depth knowledge of Data Quality and Data Governance concepts, approaches, methodologies, and tools.
- Client-facing consulting experience will be considered a plus.
iii. Technical and Functional Skills:
- Hands-on experience in any of the above DQ tools in the area of enterprise data management, preferably in complex and diverse systems environments.
- Exposure to data quality concepts: data lifecycle, data profiling, and data quality remediation (cleansing, parsing, standardization, enrichment using 3rd-party plugins, etc.).
- Strong understanding of data quality best practices, concepts, data quality management frameworks, and data quality dimensions/KPIs.
- Deep knowledge of SQL and stored procedures.
- Strong knowledge of Master Data, Data Governance, and Data Security.
- Domain knowledge of SAP Finance modules preferred.
- Hands-on experience with AI use cases in Data Quality or Data Management areas is good to have.
- Concepts and hands-on experience of master data management (matching, merging, creation of golden records for master data entities) preferred.
- Strong soft skills: interpersonal, team, and communication skills (both verbal and written).
Posted 3 days ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The role of Data Governance Manager is a first-level leadership position within the Finance Data Office’s Data Management team. This is a pivotal role for setting up and driving the data governance framework and data-related principles and policies to create a culture of data accountability across all finance data domains. The role is responsible for leading and executing the data governance agenda, including data definition, data ownership, data standards, data remediation, and master data governance processes across Finance. The Data Governance Manager is expected to partner with the wider data management team to improve data quality by implementing data monitoring solutions. The ideal candidate will have a proven track record of working with data governance platforms such as Alation or Collibra for SAP master data domains. This position takes accountability for defining and driving data governance aspects, including leading meetings and data governance forums with Data Stewards, Data Owners, Data Engineers, and other key stakeholders.

Responsibilities:
- Coordinate with Data Owners to enable identification of critical data elements for SAP master data: Supplier/Finance/Bank master.
- Develop and maintain a business-facing data glossary and data catalog for SAP master data (Supplier, Customer, Finance (GL, Cost Center, Profit Center, etc.)), capturing data definitions, lineage, and usage.
- Define the data governance framework: develop and implement data governance policies, standards, and processes to ensure data quality, data management, and compliance for relevant SAP master data (Finance, Supplier, and Customer master data).
- Conduct data quality assessments and implement corrective actions to address data quality issues.
- Collaborate with cross-functional teams to ensure data governance practices are integrated into all relevant SAP business processes.
- Data cataloging and lineage: manage data cataloging and lineage to provide visibility into data assets, their origins, and their transformations in the SAP environment.
- Facilitate governance forums, data domain councils, and change advisory boards to review data issues, standards, and continuous improvements.
- Prepare data documentation, including data models, process flows, governance policies, and stewardship responsibilities.
- Collaboration: work closely with IT, data management teams, and business units to implement data governance best practices and tools.
- Monitoring and reporting: monitor data governance activities, measure progress, and report on key metrics to senior management.
- Training and awareness: conduct training sessions and create awareness programs to promote data governance within the organization.
- Data structures and models: demonstrate deep understanding of SAP (and other ERP systems such as JD Edwards) master data structures such as Vendor, Customer, Cost Center, Profit Center, GL Accounts, etc.
- Data policies: collaborate and coordinate with the respective pillar leads to ensure the necessary policies related to data privacy, data lifecycle management, and data quality management are developed.

JOB REQUIREMENTS:
i. Education or Certifications:
- Bachelor's/Master's degree in engineering, technology, or other related degrees.
- Relevant professional-level certifications from Informatica, SAP, Collibra, Alation, or any other leading platform/tools.
- Relevant certifications from DAMA, EDM Council, and CMMI-DMM will be a bonus.
ii. Work Experience:
- You have 4-10 years of relevant experience within the Data & Analytics area, with major experience around data management areas: ideally in Data Governance and/or Data Quality or Master Data or Data Lineage, using relevant tools like Informatica, SAP MDG, Collibra, Alation, or other market-leading tools.
- You have in-depth knowledge of Data Quality and Data Governance concepts, approaches, methodologies, and tools.
- Client-facing consulting experience will be considered a plus.
iii. Technical and Functional Skills:
- Hands-on experience in any of the above tools in the area of enterprise data governance, preferably in SAP or complex and diverse systems environments.
- Experience implementing data governance in an SAP environment for both transactional and master data.
- Expert knowledge of data governance concepts around data definition and catalog, data ownership, data lineage, data policies and controls, data monitoring, and data governance forums.
- Strong knowledge of SAP peripheral systems and a good understanding of the upstream and downstream impact of master data.
- Exposure to data quality concepts: data lifecycle, data profiling, and data quality remediation (cleansing, parsing, standardization, enrichment using 3rd-party plugins, etc.).
- Strong understanding of data quality best practices, concepts, data quality management frameworks, and data quality dimensions/KPIs.
- Deep knowledge of SQL and stored procedures.
- Strong knowledge of Master Data and Data Security.
- Domain knowledge of SAP Finance modules preferred.
- Hands-on experience with AI use cases in Data Quality, Data Governance, or other data management areas is good to have.
- Concepts and hands-on experience of master data management (matching, merging, creation of golden records for master data entities) preferred.
- Strong soft skills: interpersonal, team, and communication skills (both verbal and written).
- Preferred: project management, domain knowledge (Procurement, Finance, Customer), business acumen, critical thinking, storytelling.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
Genpact is a global professional services and solutions firm with a team of over 125,000 professionals in more than 30 countries. Driven by curiosity, agility, and the desire to create lasting value for clients, we serve leading enterprises worldwide, including the Fortune Global 500. Our purpose is the relentless pursuit of a world that works better for people, and we achieve this through our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the position of Principal Consultant, Research Data Scientist. We are looking for candidates with relevant experience in Text Mining/Natural Language Processing (NLP) tools, Data sciences, Big Data, and algorithms. The ideal candidate should have full cycle experience in at least one large-scale Text Mining/NLP project, including creating a business use case, Text Analytics assessment/roadmap, technology & analytic solutioning, implementation, and change management. Experience in Hadoop, including development in the map-reduce framework, is also desirable. The Text Mining Scientist (TMS) will play a crucial role in bridging enterprise database teams and business/functional resources, translating business needs into techno-analytic problems, and working with database teams to deliver large-scale text analytic solutions. 
Responsibilities:
- Develop transformative AI/ML solutions to address clients' business requirements
- Manage project delivery involving data pre-processing, model training and evaluation, and parameter tuning
- Manage stakeholder/customer expectations and project documentation
- Research cutting-edge developments in AI/ML with NLP/NLU applications in various industries
- Design and develop solution algorithms within tight timelines
- Interact with clients to collect and synthesize requirements for an effective analytics/text mining roadmap
- Work with digital development teams to integrate algorithms into production applications
- Conduct applied research on text analytics and machine learning projects, file patents, and publish papers
Qualifications:
Minimum Qualifications/Skills:
- MS in Computer Science, Information Systems, or Computer Engineering
- Relevant experience in Text Mining/Natural Language Processing (NLP) tools, data sciences, Big Data, and algorithms
Technology:
- Open-source text mining paradigms (NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene) and cloud-based NLU tools (DialogFlow, MS LUIS)
- Statistical toolkits (R, Weka, S-Plus, Matlab, SAS-Text Miner)
- Strong Core Java experience, programming in the Hadoop ecosystem, and distributed computing concepts
- Proficiency in Python/R programming; Java programming skills are a plus
Methodology:
- Solutioning & consulting experience in verticals like BFSI and CPG, with text analytics experience on large structured and unstructured data
- Knowledge of AI methodologies (ML, DL, NLP, Neural Networks, Information Retrieval, NLG, NLU)
- Familiarity with Natural Language Processing & statistics concepts, especially in their application
- Ability to conduct client research to enhance the analytics agenda
Preferred Qualifications/Skills:
Technology:
- Expertise in NLP, NLU, and machine learning/deep learning methods
- UI development paradigms for text mining insights visualization
- Experience with Linux, Windows, GPU, Spark, Scala, and deep learning frameworks
Methodology:
- Social network modeling paradigms, tools & techniques
- Text analytics using NLP tools like Support Vector Machines and Social Network Analysis
- Previous experience with text analytics implementations using open-source packages or SAS-Text Miner
- Strong prioritization, consultative mindset, and time management skills
Job Details:
- Job Title: Principal Consultant
- Primary Location: India-Gurugram
- Schedule: Full-time
- Education Level: Master's/Equivalent
- Job Posting Date: Oct 4, 2024, 12:27:03 PM
- Unposting Date: Ongoing
- Master Skills List: Digital
- Job Category: Full Time
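The listing names open-source text mining toolkits rather than any specific method; as a minimal, hedged illustration of one building block they all share (tokenization and term-frequency counting), using only the Python standard library:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase and split on non-alphanumeric characters."""
    return re.findall(r"[a-z0-9]+", text.lower())

def term_frequencies(docs: list[str]) -> Counter:
    """Aggregate term counts across a small corpus."""
    counts = Counter()
    for doc in docs:
        counts.update(tokenize(doc))
    return counts

corpus = [
    "Text mining extracts structure from unstructured text.",
    "NLP pipelines tokenize text before modeling.",
]
tf = term_frequencies(corpus)
print(tf.most_common(3))
```

Toolkits like NLTK or spaCy layer stemming, tagging, and models on top of this, but term counting remains the substrate for TF-IDF, bag-of-words classifiers, and many retrieval pipelines.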
Posted 3 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Skills & Expertise
- Angular (v8.0+) & TypeScript: 8+ years of hands-on development experience
- REST API Integration: proficient in building and consuming APIs
- UI Frameworks: expertise in Material Design and Bootstrap
- Template-to-Screen Conversion: skilled at transforming UI/UX designs into functional screens
- Version Control Tools: experience with GitHub, TeamCity
- Testing: development of JUnit / MUnit test cases
- Ticketing Tools: familiar with JIRA, ServiceNow
- Server Knowledge: working knowledge of Apache Tomcat
- Frontend Technologies: proficient in HTML, CSS
- Backend & DB Basics: basic Java skills and experience with databases like MySQL, MS-SQL, Oracle
- Agile & Waterfall Methodologies: experience in both project management styles
- Communication: excellent verbal and written communication skills
- JSON Handling: competent in parsing and managing JSON data
Key Responsibilities
Development & UI/UX Implementation
- Lead the design, development, and maintenance of robust, responsive, and user-friendly web applications using Angular (v8.0+) and TypeScript.
- Demonstrate strong proficiency in transforming raw UI/UX designs and templates into functional, pixel-perfect screens.
- Leverage expertise in Material Design and Bootstrap to build visually appealing and consistent user interfaces.
API Integration & Data Handling
- Proficiently build and consume REST APIs, ensuring seamless data flow between frontend and backend systems.
- Competently parse and manage JSON data for effective client-side operations.
Frontend & Backend Foundations
- Show strong proficiency in fundamental frontend technologies: HTML and CSS.
- Apply basic Java skills and experience with databases like MySQL, MS-SQL, and Oracle to understand full-stack interactions.
Testing & Quality Assurance
- Develop JUnit / MUnit test cases to ensure code quality, reliability, and maintainability.
Tools & Methodologies
- Utilize version control tools such as GitHub and TeamCity for collaborative development.
- Work with ticketing tools like JIRA and ServiceNow for project tracking and issue management.
- Possess working knowledge of Apache Tomcat for deployment understanding.
- Operate effectively within both Agile and Waterfall methodologies. (ref:hirist.tech)
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in more than 30 countries. We are driven by curiosity, entrepreneurial agility, and the desire to create lasting value for our clients, including Fortune Global 500 companies. Our purpose is the relentless pursuit of a world that works better for people, and we serve leading enterprises with deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the role of Senior Principal Consultant, Research Data Scientist. The ideal candidate should have experience in Text Mining, Natural Language Processing (NLP) tools, Data sciences, Big Data, and algorithms. It is desirable to have full-cycle experience in at least one Large Scale Text Mining/NLP project, including creating a business use case, Text Analytics assessment/roadmap, Technology & Analytic Solutioning, Implementation, and Change Management. Experience in Hadoop, including development in the map-reduce framework, is also required. The Text Mining Scientist (TMS) will play a crucial role in bridging enterprise database teams and business/functional resources, translating business needs into techno-analytic problems and working with database teams to deliver large-scale text analytic solutions. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools. Responsibilities include developing transformative AI/ML solutions, managing project delivery, stakeholder/customer expectations, project documentation, project planning, and staying updated on industrial and academic developments in AI/ML with NLP/NLU applications. The role also involves conceptualizing, designing, building, and developing solution algorithms, interacting with clients to collect requirements, and conducting applied research on text analytics and machine learning projects. 
Qualifications we seek:
Minimum Qualifications/Skills:
- MS in Computer Science, Information Systems, or Computer Engineering
- Systems engineering experience with Text Mining/NLP tools, data sciences, Big Data, and algorithms
Technology:
- Proficiency in open-source text mining paradigms like NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, and Lucene, and cloud-based NLU tools such as DialogFlow and MS LUIS
- Exposure to statistical toolkits like R, Weka, S-Plus, Matlab, and SAS-Text Miner
- Strong Core Java experience, Hadoop ecosystem, and Python/R programming skills
Methodology:
- Solutioning & consulting experience in verticals like BFSI and CPG
- Solid foundation in AI methodologies like ML, DL, NLP, and Neural Networks
- Understanding of NLP & statistics concepts and applications such as Sentiment Analysis
Preferred Qualifications/Skills:
Technology:
- Expertise in NLP, NLU, and machine learning/deep learning methods
- UI development paradigms; Linux, Windows, and GPU experience; Spark, Scala
- Deep learning frameworks like TensorFlow, Keras, Torch, Theano
Methodology:
- Social network modeling paradigms
- Text analytics using NLP tools; text analytics implementations
This is a full-time position based in India-Noida. The candidate should have a Master's degree or equivalent education level. The job was posted on Oct 7, 2024, and the unposting date is ongoing. The primary skills required are digital, and the job category is full-time.
Posted 4 days ago
3.0 years
2 - 4 Lacs
Bengaluru
On-site
- 3+ years of experience building models for business applications
- Experience in patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python, or a related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
- Knowledge of standard speech and machine learning techniques
The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of Generative AI. Key responsibilities include:
- Research, experiment, and build proofs of concept advancing the state of the art in AI & ML for GenAI
- Collaborate with cross-functional teams to architect and execute technically rigorous AI projects
- Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines
- Engage in effective technical communication (written & spoken) with coordination across teams
- Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility
- Publish research papers in internal and external venues of repute
- Support on-call activities for critical issues
Basic Qualifications:
- Master’s or PhD in computer science, statistics, or a related field
- 2-7 years of experience in deep learning, machine learning, and data science
- Proficiency in coding and software development, with a strong focus on machine learning frameworks
- Experience in Python or another language; command-line usage; familiarity with Linux and AWS ecosystems
- Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, and development and evaluation data sets
- Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting
- Papers published in AI/ML venues of repute
Preferred Qualifications:
- Track record of diving into data to discover hidden patterns and conducting error/deviation analysis
- Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, and ability to accurately determine cause-and-effect relations
- The motivation to achieve results in a fast-paced environment
- Exceptional level of organization and strong attention to detail
- Comfortable working in a fast-paced, highly collaborative, dynamic work environment
- Experience using Unix/Linux
- Experience in professional software development
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Senior Python Developer – AI/ML Document Automation
Location: Hyderabad
Work Mode: Hybrid
Experience: 5+ Years
Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation. The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms.
Roles and Responsibilities:
- Design and develop modular Python applications for document parsing and intelligent automation.
- Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction.
- Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats.
- Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker.
- Collaborate with cross-functional teams to define automation goals and data strategies.
- Conduct code reviews, mentor junior developers, and uphold best coding practices.
- Monitor model performance and implement feedback mechanisms for continuous improvement.
- Maintain thorough documentation of workflows, metrics, and architectural decisions.
Mandatory Skills:
- Expert in Python (OOP, asynchronous programming, modular design).
- Strong foundation in machine learning algorithms and natural language processing techniques.
- Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.
- Proficient in developing REST APIs using FastAPI or Flask.
- Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools.
- Skilled in regex-based extraction and rule-based NER.
- Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure).
- Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
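As a hedged sketch only (the field names and patterns below are illustrative assumptions, not part of the role), the regex-based extraction of invoice fields mentioned in the listing might look like this with the standard library:

```python
import re

# Illustrative patterns; real invoices vary widely and would need
# per-template tuning or a layout-aware model.
PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    "date":       re.compile(r"Date\s*[:\-]?\s*(\d{2}[/-]\d{2}[/-]\d{4})", re.I),
    "total":      re.compile(
        r"Total\s*(?:Amount)?\s*[:\-]?\s*(?:INR|Rs\.?|₹)?\s*([\d,]+\.\d{2})", re.I),
}

def extract_fields(text: str) -> dict:
    """Apply each pattern and keep the first match per field."""
    out = {}
    for field, pattern in PATTERNS.items():
        m = pattern.search(text)
        if m:
            out[field] = m.group(1)
    return out

sample = "Invoice No: INV-0042\nDate: 04/07/2024\nTotal Amount: Rs. 12,500.00"
print(extract_fields(sample))
```

In practice such rules are paired with OCR (e.g., Tesseract) for scanned pages and with model-based NER for fields that resist fixed patterns.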
Posted 4 days ago
0.0 - 1.0 years
8 - 14 Lacs
Hyderabad, Telangana
On-site
Job Title: Senior Python Developer – AI/ML Document Automation
Location: Hyderabad
Work Mode: Hybrid
Experience: 5+ Years
Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation. The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms.
Roles and Responsibilities:
- Design and develop modular Python applications for document parsing and intelligent automation.
- Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction.
- Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats.
- Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker.
- Collaborate with cross-functional teams to define automation goals and data strategies.
- Conduct code reviews, mentor junior developers, and uphold best coding practices.
- Monitor model performance and implement feedback mechanisms for continuous improvement.
- Maintain thorough documentation of workflows, metrics, and architectural decisions.
Mandatory Skills:
- Expert in Python (OOP, asynchronous programming, modular design).
- Strong foundation in machine learning algorithms and natural language processing techniques.
- Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.
- Proficient in developing REST APIs using FastAPI or Flask.
- Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools.
- Skilled in regex-based extraction and rule-based NER.
- Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure).
- Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
Job Type: Full-time
Pay: ₹800,000.00 - ₹1,400,000.00 per year
Benefits: Provident Fund
Schedule: Day shift, Monday to Friday
Application Question(s): Are you an immediate joiner?
Experience: Python: 2 years (Required); AI/ML: 2 years (Required); NLP: 1 year (Required)
Location: Hyderabad, Telangana (Required)
Work Location: In person
Posted 4 days ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Team
The Credit Strategy team at Navi is responsible for developing and optimizing underwriting strategies across our key lending products. The team owns and drives key underwriting metrics, asset quality indicators, and portfolio monitoring while ensuring that credit decisions align with business objectives and the risk tolerance of the Company. This team focuses on continually enhancing underwriting quality and portfolio health to support sustainable growth.
About the Role
This role offers an opportunity to be an integral part of the team that is scaling up the Personal and/or Housing Loans business at Navi. It involves owning end-to-end credit policies from creation to implementation for different customer segments, portfolio management, and monitoring credit metrics. You’ll have the opportunity to apply cutting-edge techniques to real-world challenges, while collaborating closely with cross-functional teams such as product, analytics, business, and data science to deliver measurable business impact. This isn’t just a role - it’s a chance to contribute to the future of fintech through innovative, high-ownership work that makes a visible difference.
Must Haves
- Highly analytical, with the ability to find patterns in data and analyze potential impact against key credit risk metrics and business drivers
- Ability to work in a fast-paced environment and be a self-starter
- Takes initiative and can think of new approaches to problem-solving
- Works in a dynamic business environment: structures problems, defines and tracks actionables
- Excellent verbal & written communication skills, as well as presentation skills
- Working knowledge of SQL, Excel, Tableau.
- Python would be a plus
- Graduation from top IITs/BITS with 3-6 years of experience (preferred but not mandatory), or an MBA from a top 4 B-school with up to 5 years of experience
What We Expect From You
- Be part of and develop a high-impact team, fostering a culture of learning and a growth mindset
- Drive development of risk-based credit strategies and amount strategies to maximize approvals within specific segments while minimizing credit risk; own portfolio risk metrics - bounces, PAR metrics, roll rates, etc.
- Monitor portfolio risk along granular dimensions and constantly implement strategies to maintain risk metrics within specific ranges; monitor various operational metrics and develop alerting mechanisms to maintain process efficiency
- Maintain a high level of collaboration with Navi’s Data Science (DS) team in developing an extensive range of credit underwriting models across the entire lifecycle - from conceptualization to deployment, model validation, and optimization
- Work towards continuous improvement (through testing and calibration) of DS models ranging across underwriting, parsing, income assessment, etc.
- Innovate and experiment with various new data sources for underwriting
- Identify emerging credit risks across the portfolio, and drive key initiatives to help achieve credit risk mitigation objectives
- Collaborate with several stakeholder functions, such as Business, Analytics, Tech, Product, and Collections, to achieve these outcomes
The approach to this role will involve:
- Reviewing credit underwriting outcomes across various cuts (borrower level, segment level, parameter level, etc.) to gather credit insights and make necessary policy modifications
- Identifying policy implementation gaps and making necessary improvements.
- Evaluating data sources, including alternate data sources, for digital underwriting of personal/housing loans
- Objectively assessing outcomes driven by credit underwriting strategies and driving continuous improvement
- Owning the recommendations made from this process, and the action items linked to them, through to appropriate conclusions
- Streamlining processes to manage risks and enhance efficiencies
Inside Navi
We are shaping the future of financial services for a billion Indians through products that are simple, accessible, and affordable. From Personal & Home Loans to UPI, Insurance, Mutual Funds, and Gold — we’re building tech-first solutions that work at scale, with a strong customer-first approach. Founded by Sachin Bansal & Ankit Agarwal in 2018, we are one of India’s fastest-growing financial services organisations. But we’re just getting started!
Our Culture
The Navi DNA: Ambition. Perseverance. Self-awareness. Ownership. Integrity.
We’re looking for people who dream big when it comes to innovation. At Navi, you’ll be empowered with the right mechanisms to work in a dynamic team that builds and improves innovative solutions. If you’re driven to deliver real value to customers, no matter the challenge, this is the place for you. We chase excellence by uplifting each other—and that starts with every one of us.
Why You'll Thrive at Navi
At Navi, it’s about how you think, build, and grow. You’ll thrive here if:
● You’re impact-driven: you take ownership, build boldly, and care about making a real difference.
● You strive for excellence: good isn’t good enough; you bring focus, precision, and a passion for quality.
● You embrace change: you adapt quickly, move fast, and always put the customer first.
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer - Pune and Hyderabad
- Preferably BE/B.Tech/MCA/M.Sc. with a minimum of 4+ years of experience in data engineering
- Comprehensive understanding of, and ability to apply, data engineering techniques, from event streaming and real-time analytics to computational grids and graph processing engines
- Curious to learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind
- Strong command of at least one language among Python, Java, Golang
- Understanding of data management and database technologies, including SQL/NoSQL
- Understanding of data products, data structures, and data manipulation techniques, including classification, parsing, and pattern matching
- Experience with Databricks, ADLS, Delta Lake/Tables, and ETL tools would be an asset
- Good understanding of engineering practices and the software development lifecycle
- Enthusiastic, self-motivated, and client-focused
- Strong communicator, from making presentations to technical writing

- Transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques
- Design, model, develop, and improve data pipelines and data products
- Engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using data platform infrastructure effectively
- Develop, train, and apply machine-learning models to make better predictions, automate manual processes, and solve challenging business problems
- Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements
- Build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
- Understand, represent, and advocate for client needs
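The parsing, pattern matching, and classification techniques named above can be sketched on toy data; the line format, levels, and bucket names here are invented for the illustration, not taken from any particular platform:

```python
import re
from dataclasses import dataclass

@dataclass
class Event:
    ts: str
    level: str
    message: str

# Illustrative line format; a real pipeline would match each source's format.
LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<level>INFO|WARN|ERROR)\s+(?P<message>.*)$")

def parse_events(lines: list[str]) -> list[Event]:
    """Parse raw lines into typed records, skipping lines that do not match."""
    events = []
    for line in lines:
        m = LINE.match(line)
        if m:
            events.append(Event(**m.groupdict()))
    return events

def classify(events: list[Event]) -> dict:
    """Simple pattern-based classification into alerting buckets."""
    buckets = {"actionable": [], "informational": []}
    for e in events:
        key = "actionable" if e.level in ("WARN", "ERROR") else "informational"
        buckets[key].append(e)
    return buckets

raw = [
    "2024-10-04T10:00:01Z INFO pipeline started",
    "2024-10-04T10:00:02Z ERROR sink unreachable",
    "malformed line without a level",
]
buckets = classify(parse_events(raw))
print(len(buckets["actionable"]), len(buckets["informational"]))
```

The same parse-then-classify shape scales up directly in streaming engines and in Delta Lake/ETL jobs; only the volume and the execution substrate change.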
Posted 4 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Engineering Services Practitioner
Project Role Description: Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, and technical knowledge and practical experience. Work across structural and stress design, qualification, configuration, and technical management.
Must-have skills: 5G Wireless Networks & Technologies
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education
Job Title: 5G Core Network Ops Senior Engineer
Summary: We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination.
Roles and Responsibilities:
1. Fault Handling & Troubleshooting:
- Provide Level 2 (L2) support for 5G Core SA network functions in a production environment.
- Nokia EDR operations & support: monitor and maintain the health of Nokia EDR systems.
- Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery.
- Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.).
- Validate EDR formats and schemas against 3GPP and Nokia specifications.
- NCOM platform operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform.
- Manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments.
- Analyze alarms from NetAct/Mantaray or external monitoring tools.
- Correlate events using Netscout, Mantaray, and PM/CM data.
- Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec, and handover.
- Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload).
- Perform packet tracing (Wireshark) or core traces (PCAP, logs) and Nokia PCMD trace capturing and analysis.
- Perform root cause analysis (RCA) and implement corrective actions.
- Handle escalations from Tier-1 support and provide timely resolution.
2. Automation & Orchestration:
- Automate deployment, scaling, healing, and termination of network functions using NCOM.
- Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD).
- Integrate NCOM with third-party systems using open APIs and custom plugins.
3. Session & Service Investigation:
- Trace subscriber issues (5G attach, PDU session, QoS).
- Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM).
- Correlate user-plane drops, abnormal releases, and bearer QoS mismatches.
- Work on preventive measures with the L1 team for health checks & backups.
4. Configuration and Change Management:
- Create MOPs for required changes; validate MOPs with Ops teams and stakeholders before rollout/implementation.
- Maintain detailed documentation of network configurations, incident reports, and operational procedures.
- Support software upgrades, patch management, and configuration changes.
- Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs).
- Audit NRF/PCF/UDM etc. configuration & databases.
- Validate policy rules, slicing parameters, and DNN/APN settings.
- Support integration of new 5G Core nodes and features into the live network.
5. Performance Monitoring:
- Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, and user-plane utilization.
- Proactively detect degrading KPI trends.
6. Security & Access Support:
- Application support for Nokia EDR and CrowdStrike.
- Assist with certificate renewals, firewall/NAT issues, and access failures.
7. Escalation & Coordination:
- Escalate unresolved issues to L3 teams, Nokia TAC, and OSS/Core engineering.
- Work with L3 and care teams for issue resolution.
- Ensure compliance with SLAs and contribute to continuous service improvement.
8. Reporting:
- Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance.
Technical Experience and Professional Attributes:
- 5-9 years of experience in the telecom industry, with hands-on experience.
- Mandatory experience with the Nokia 5G Core-SA platform.
- Hands-on experience with Nokia EDR operations & support: monitoring and maintaining the health of Nokia EDR systems; performing log analysis and troubleshooting issues related to EDR generation, parsing, and delivery.
- Experience with NCOM platform operations: operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform.
- NF deployment and troubleshooting experience: deployment, scaling, healing, and termination of network functions using NCOM.
- Solid understanding of 5G Packet Core network protocols and interfaces such as N1, N2, N3, N6, N7, N8, GTP-C/U, and HTTPS, including the ability to trace and debug issues.
- Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning, and Flowone.
- In-depth understanding of 3GPP call flows for 5G-SA and 5G-NSA, call routing, number analysis, system configuration, data roaming, and knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI.
- Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based).
- Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces.
- Strong analytical and troubleshooting skills.
- Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing) and log management systems (e.g., Prometheus, Grafana).
- Knowledge of network protocols and security (TLS, IPsec).
- Excellent communication and documentation skills.
Educational Qualification: BE/BTech; 15 years of full-time education
Additional Information:
- Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes).
- Experience with Nokia 5G Core, NCOM, NCS, Nokia private cloud and public cloud (AWS preferred), and cloud-native environments (Kubernetes, Docker, CI/CD pipelines).
- Cloud certifications (AWS) / experience with AWS Cloud.
Posted 4 days ago
2.0 years
6 Lacs
Thiruvananthapuram
On-site
2 - 3 Years
1 Opening
Trivandrum
Role description
Overview: We are looking for a skilled SIEM Administrator to manage and maintain Security Information and Event Management (SIEM) solutions such as Innspark, LogRhythm, or similar tools. This role is critical to ensuring effective security monitoring, log management, and event analysis across our systems.
Key Responsibilities:
- Design, deploy, and manage SIEM tools (e.g., Innspark, LogRhythm, Splunk).
- Develop and maintain correlation rules, alerts, dashboards, and reports.
- Integrate logs from servers, network devices, cloud services, and applications.
- Troubleshoot log collection, parsing, normalization, and event correlation issues.
- Work with security teams to improve detection and response capabilities.
- Ensure SIEM configurations align with compliance and audit requirements.
- Perform routine SIEM maintenance (e.g., patching, upgrades, health checks).
- Create and maintain documentation for implementation, architecture, and operations.
- Participate in evaluating and testing new SIEM tools and features.
- Support incident response by providing relevant event data and insights.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Security, or a related field.
- 3+ years of hands-on experience with SIEM tools.
- Experience with Innspark, LogRhythm, or other SIEM platforms (e.g., Splunk, QRadar, ArcSight).
- Strong knowledge of log management and event normalization.
- Good understanding of cybersecurity concepts and incident response.
- Familiarity with Windows/Linux OS and network protocols.
- Scripting knowledge (e.g., Python, PowerShell) is a plus.
- Strong troubleshooting, analytical, and communication skills.
- Industry certifications (CEH, Security+, SSCP, or vendor-specific) are a plus.
Key Skills:
- SIEM tools (Innspark, LogRhythm, Splunk)
- Troubleshooting
- Log management & analysis
- Scripting (optional)
- Security monitoring
Skills: SIEM, Splunk, Troubleshooting
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
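To illustrate the log parsing and normalization work described above (the formats, field names, and sample lines here are invented for the sketch and not tied to any specific SIEM):

```python
import json
import re

# Syslog-like pattern, e.g. "Oct  4 10:00:01 fw01 kernel: dropped packet ..."
SYSLOG = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s(?P<proc>[\w/]+):\s(?P<msg>.*)$"
)

def normalize(raw: str) -> dict:
    """Map a raw log line (JSON or syslog-like) onto one common schema."""
    if raw.lstrip().startswith("{"):           # JSON application log
        rec = json.loads(raw)
        return {"timestamp": rec.get("time"),
                "source": rec.get("service"),
                "message": rec.get("msg", "")}
    m = SYSLOG.match(raw)                       # syslog-like line
    if m:
        return {"timestamp": m.group("ts"),
                "source": m.group("host"),
                "message": m.group("msg")}
    # Passthrough for unparseable lines so no event is silently dropped.
    return {"timestamp": None, "source": None, "message": raw}

events = [
    '{"time": "2024-10-04T10:00:00Z", "service": "auth", "msg": "login failed"}',
    "Oct  4 10:00:01 fw01 kernel: dropped packet from 10.0.0.5",
]
for e in events:
    print(normalize(e))
```

Real SIEM platforms do this with parser/normalizer configuration rather than hand-written code, but the underlying idea (many source formats mapped onto one common event schema) is the same.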
Posted 5 days ago
3.0 - 10.0 years
0 Lacs
Chennai
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. CMSTDR Senior (TechOps) KEY Capabilities: Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA Minimum of Splunk Power User Certification Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Perform remote and on-site gap assessments of the SIEM solution. Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations Conduct interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.) Evaluate the SIEM based on the defined criteria and prepare audit reports Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions.
Offer consultative advice in security principles and best practices related to SIEM operations Design and document a SIEM solution to meet the customer needs Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers Verification of data of log sources in the SIEM, following the Common Information Model (CIM) Experience in parsing and masking of data prior to ingestion in the SIEM Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources Assist clients with technical guidance to configure end log sources (in-scope) to be integrated into the SIEM Experience in handling big data integration via Splunk Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems Hands-on experience in development and customization of Splunk Apps & Add-Ons Build advanced visualizations (Interactive Drilldown, Glass tables etc.) Build and integrate contextual data into notable events Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as the ES App, UEBA, ITSI, etc. Sound knowledge in configuration of Alerts and Reports. Good exposure to automatic lookups, data models and creating complex SPL queries.
Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements Work with the client SPOC on correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations Experience in creating custom commands, custom alert actions, adaptive response actions, etc. Qualification & experience: Minimum of 3 to 10 years’ experience with a depth of network architecture knowledge that will translate over to deploying and integrating a complicated security intelligence solution into global enterprise environments. Strong oral, written and listening skills are an essential component of effective consulting. Strong background in network administration. The ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary. Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows Domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience with the design and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management Multiple cluster deployments & management experience as per vendor guidelines and industry best practices Troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage Certifications in a core security related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
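The correlation-rule development this role describes (automated security event monitoring with alerting thresholds) can be prototyped in plain Python before being expressed as a SIEM query. The window size, threshold, and event shape below are illustrative assumptions:

```python
from collections import defaultdict

WINDOW_SECONDS = 300   # illustrative: 5-minute sliding correlation window
THRESHOLD = 5          # illustrative: alert on 5+ failures in the window

def correlate(events):
    """Return source IPs that reach THRESHOLD failed logons within the window.

    `events` is a list of (epoch_seconds, src_ip) failed-logon tuples.
    """
    by_ip = defaultdict(list)
    flagged = set()
    for ts, ip in sorted(events):
        # keep only this IP's timestamps still inside the sliding window
        times = [t for t in by_ip[ip] if ts - t <= WINDOW_SECONDS]
        times.append(ts)
        by_ip[ip] = times
        if len(times) >= THRESHOLD:
            flagged.add(ip)
    return sorted(flagged)

# One noisy source (a failure every 30 s) and one benign source.
events = [(i * 30, "10.0.0.5") for i in range(6)] + [(0, "10.0.0.9")]
print(correlate(events))
```

A production rule would live inside the SIEM itself (e.g., as an SPL search with a time bucket), but sketching the logic this way makes the tuning knobs (window, threshold) explicit before translation.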
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Staff (CTM – Threat Detection & Response) KEY Capabilities: Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA Minimum of Splunk Power User Certification Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Assist in remote and on-site gap assessments of the SIEM solution. Work on defined evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations Assist in interviews with stakeholders, review documents (SOPs, architecture diagrams, etc.) Assist in evaluating the SIEM based on the defined criteria and preparing audit reports Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment.
Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers Verification of data of log sources in the SIEM, following the Common Information Model (CIM) Experience in parsing and masking of data prior to ingestion in the SIEM Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources Assist clients with technical guidance to configure their log sources (in-scope) to be integrated into the SIEM Experience in SIEM content development, which includes: Hands-on experience in development and customization of Splunk Apps & Add-Ons Build advanced visualizations (Interactive Drilldown, Glass tables etc.) Build and integrate contextual data into notable events Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near real-time visibility into the performance of client applications. Sound knowledge in configuration of Alerts and Reports. Good exposure to automatic lookups, data models and creating complex SPL queries. Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements Experience in creating custom commands, custom alert actions, adaptive response actions, etc. Qualification & experience: Minimum of 3 years’ experience in Splunk and 3 to 5 years of overall experience, with knowledge of operating systems and basic network technologies Experience in a SOC as an L1/L2 Analyst will be an added advantage Strong oral, written and listening skills are an essential component of effective consulting.
Good to have knowledge of Vulnerability Management, Windows Domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting Certification in any other SIEM Solution such as IBM QRadar, Exabeam, Securonix will be an added advantage Certifications in a core security related discipline (CEH, Security+, etc.) will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities Lead the planning, design, and execution of Applied GenAI projects, aligning them with business objectives and technical requirements. Collaborate with cross-functional teams including engineers, data scientists, and business stakeholders to deliver impactful GenAI solutions. Provide technical leadership and mentorship, fostering a culture of innovation and continuous improvement. Conduct thorough assessments of client needs to design tailored GenAI strategies addressing specific business challenges. Configure GenAI models using prompt engineering, keyword tuning, rules, preferences, and weightages for customer-specific datasets. Oversee deployment and integration of GenAI models into production environments, ensuring scalability, reliability, and performance. Demonstrate strong troubleshooting abilities using tools such as SQL, Kibana Logs, and Azure AppInsights. Monitor solution performance and provide data-driven recommendations for enhancements and optimization. Stay current with the latest GenAI advancements, incorporating best practices into implementation. Prepare and present reports, documentation, and demos to clients and senior leadership, showcasing progress and insights. Conduct GenAI proof-of-concepts and demonstrations exploring the art of the possible in AI applications. Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Extensive experience in AI/ML with a focus on Generative AI technologies. Proven success in leading and delivering complex AI projects. Strong understanding of GenAI frameworks, tools, and methodologies. Excellent problem-solving, strategic thinking, and project management skills. Exceptional communication and collaboration abilities with diverse teams and stakeholders. Experience with cloud platforms and AI deployments in cloud environments (Azure preferred). Preferred Skills Hands-on experience with GenAI-based products and prompt engineering. 
Proficiency in text analytics/NLP, including machine learning techniques and algorithms. Skilled in text mining, parsing, and classification using state-of-the-art techniques. Expertise in the Microsoft technology stack. Good knowledge of client-side scripting: JavaScript and jQuery. Understanding of ethical AI principles and best practices in implementation. Proficiency in Python, R, Java, and regular expressions. Experience using requirement management tools like TFS. Strong verbal and written communication skills with the ability to influence peers and leadership. About Us Icertis is the global leader in AI-powered contract intelligence. The Icertis platform revolutionizes contract management, equipping customers with powerful insights and automation to grow revenue, control costs, mitigate risk, and ensure compliance - the pillars of business success. Today, more than one third of the Fortune 100 trust Icertis to realize the full intent of millions of commercial agreements in 90+ countries. About The Team Who we are: Icertis is the only contract intelligence platform companies trust to keep them out in front, now and in the future. Our unwavering commitment to contract intelligence is grounded in our FORTE values—Fairness, Openness, Respect, Teamwork and Execution—which guide all our interactions with employees, customers, partners, and stakeholders. Because in our mission to be the contract intelligence platform of the world, we believe how we get there is as important as the destination. Icertis, Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Icertis, Inc.
complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to careers@icertis.com or get in touch with your recruiter.
Posted 5 days ago
15.0 years
0 Lacs
Gurgaon
On-site
Project Role : Engineering Services Practitioner Project Role Description : Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, technical knowledge and practical experience. Work across structural and stress design, qualification, configuration and technical management. Must have skills : 5G Wireless Networks & Technologies Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Job Title: 5G Core Network Ops Senior Engineer Summary: We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination. Roles and Responsibilities: 1. Fault Handling & Troubleshooting: • Provide Level 2 (L2) support for 5G Core SA network functions in a production environment. • Nokia EDR Operations & Support: Monitor and maintain the health of Nokia EDR systems. • Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery. • Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.). • Validate EDR formats and schemas against 3GPP and Nokia specifications. • NCOM Platform Operations: Operate and maintain the Nokia Cloud Operations Manager (NCOM) platform. • Manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments. • Analyze alarms from NetAct/Mantaray, or external monitoring tools. • Correlate events using Netscout, Mantaray, and PM/CM data.
• Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec and handover. • Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload). • Perform packet tracing (Wireshark) or core trace (PCAP, logs) and Nokia PCMD trace capturing and analysis. • Perform root cause analysis (RCA) and implement corrective actions. • Handle escalations from Tier-1 support and provide timely resolution. 2. Automation & Orchestration • Automate deployment, scaling, healing, and termination of network functions using NCOM. • Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD). • Integrate NCOM with third-party systems using open APIs and custom plugins. 3. Session & Service Investigation: • Trace subscriber issues (5G attach, PDU session, QoS). • Use tools like EDR, Flow Tracer, Nokia Cloud Operations Manager (COM). • Correlate user-plane drops, abnormal releases, bearer QoS mismatches. • Work with the L1 team on preventive measures for health checks & backups. 4. Configuration and Change Management: • Create an MOP for required changes, and validate the MOP with Ops teams and stakeholders before rollout/implementation. • Maintain detailed documentation of network configurations, incident reports, and operational procedures. • Support software upgrades, patch management, and configuration changes. • Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs). • Audit NRF/PCF/UDM, etc., configuration & database. • Validate policy rules, slicing parameters, and DNN/APN settings. • Support integration of new 5G Core nodes and features into the live network. 5. Performance Monitoring: • Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, user-plane utilization. • Proactively detect degrading KPI trends. 6.
Security & Access Support: • Application support for Nokia EDR and CrowdStrike. • Assist with certificate renewal, firewall/NAT issues, and access failures. 7. Escalation & Coordination: • Escalate unresolved issues to L3 teams, Nokia TAC, OSS/Core engineering. • Work with L3 and care teams for issue resolution. • Ensure compliance with SLAs and contribute to continuous service improvement. 8. Reporting • Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance. Technical Experience and Professional Attributes: • 5–9 years of hands-on experience in the telecom industry. • Mandatory experience with the Nokia 5G Core-SA platform. • Hands-on experience with Nokia EDR operations & support: monitoring and maintaining the health of Nokia EDR systems. • Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery. • Experience operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform • NF deployment and troubleshooting experience: deployment, scaling, healing, and termination of network functions using NCOM. • Solid understanding of 5G Core packet core network protocols such as N1, N2, N3, N6, N7, N8, 5G Core interfaces, GTP-C/U and HTTPS, including the ability to trace and debug issues. • Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning and Flowone. • In-depth understanding of 3GPP call flows for 5G-SA and 5G NSA, call routing, number analysis, system configuration, call flow, data roaming, configuration, and knowledge of telecom standards, e.g., 3GPP, ITU-T and ANSI. • Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based). • Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces. • Strong analytical and troubleshooting skills. • Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing).
And log management systems (e.g., Prometheus, Grafana). • Knowledge of network protocols and security (TLS, IPsec). • Excellent communication and documentation skills. Educational Qualification: • BE / BTech • 15 Years Full Time Education Additional Information: • Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes). • Experience in Nokia Platform 5G Core, NCOM, NCS, Nokia Private cloud and Public Cloud (AWS preferred), cloud-native environments (Kubernetes, Docker, CI/CD pipelines). • Cloud Certifications (AWS) / Experience on AWS Cloud
Posted 6 days ago
8.0 years
0 Lacs
Andhra Pradesh
On-site
8+ years of professional experience in Java development Strong knowledge of core Java, Spring Boot, and RESTful API development Hands-on experience with CI/CD pipelines, Git, and build tools like Maven or Gradle Experience in developing and deploying microservices Proven experience building parsers, custom-built or with frameworks such as ANTLR, JavaCC, JFlex, JSoup, or similar Strong knowledge of Spring Boot, REST APIs, and backend frameworks Hands-on experience with JSON, XML, and regular expressions for parsing and transformation Exposure to event-driven architectures (Kafka, RabbitMQ) Familiarity with testing frameworks like JUnit, Mockito Understanding of Agile methodologies and DevOps practices About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
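The parsing-and-transformation skills this role lists (JSON, XML, regular expressions) follow one common pattern: take a vendor document in one format and normalize it into records another service can consume. A compact sketch of that pattern (shown in Python rather than Java, with a made-up document shape) might look like this:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical source document; a parser layer turns vendor XML
# into normalized, JSON-ready records for downstream services.
RAW = """
<orders>
  <order id="A-1"><amount currency="USD">19.99</amount></order>
  <order id="A-2"><amount currency="EUR">5.00</amount></order>
</orders>
"""

def xml_to_records(xml_text):
    """Flatten each <order> element into a plain dict."""
    root = ET.fromstring(xml_text)
    return [
        {
            "order_id": order.get("id"),
            "currency": order.find("amount").get("currency"),
            "amount": float(order.find("amount").text),
        }
        for order in root.findall("order")
    ]

records = xml_to_records(RAW)
print(json.dumps(records, indent=2))
```

In Java the same shape would typically use JAXB or a JSoup/ANTLR-based parser, but the normalization step is identical.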
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The D. E. Shaw group is a global investment and technology development firm with more than $65 billion in investment capital as of December 1, 2024, and offices in North America, Europe, and Asia. Since our founding in 1988, our firm has earned an international reputation for successful investing based on innovation, careful risk management, and the quality and depth of our staff. We have a significant presence in the world's capital markets, investing in a wide range of companies and financial instruments in both developed and developing economies. We are looking for resourceful and exceptional candidates for the Data Engineer role within our product development teams based out of Hyderabad. At DESIS, the Data Engineers develop Web Robots, or Web Spiders, that crawl through the web and retrieve data in the form of HTML, plain text, PDFs, Excel, and any other format that is either structured or unstructured. The job functions of the engineer also include scraping the website data into a structured format and building automated and custom reports on the downloaded data that are used as knowledge for business purposes. The team also works on automating end-to-end data pipelines. WHAT YOU'LL DO DAY-TO-DAY: As a member of the Data Engineering team, you will be responsible for various aspects of data extraction, such as understanding the data requirements of the business group, reverse-engineering the website, its technology, and the data retrieval process, re-engineering by developing web robots to automate the extraction of the data, and building monitoring systems to ensure the integrity and quality of the extracted data. You will also be responsible for managing the changes to the website's dynamics and layout to ensure clean downloads, building scraping and parsing systems to transform raw data into a structured form, and offering operations support to ensure high availability and zero data losses. 
Additionally, you will be involved in other tasks such as storing the extracted data in the recommended databases, building high-performing, scalable data extraction systems, and automating data pipelines. WHO WE’RE LOOKING FOR: Basic qualifications: 2-4 years of experience in website data extraction and scraping Good knowledge of relational databases, writing complex queries in SQL, and dealing with ETL operations on databases Proficiency in Python for performing operations on data Expertise in Python frameworks like Requests, UrlLib2, Selenium, Beautiful Soup, and Scrapy A good understanding of HTTP requests and responses, HTML, CSS, XML, JSON, and JavaScript Expertise with debugging tools in Chrome to reverse engineer website dynamics A good academic background and accomplishments A BCA/MCA/BS/MS degree with a good foundation and practical application of knowledge in data structures and algorithms Problem-solving and analytical skills Good debugging skills Interested candidates can apply through our website: https://www.deshawindia.com/recruit/jobs/Adv/Link/SnrMemDEFeb25 We encourage candidates with relevant experience looking to restart their careers after a break to apply for this position. Learn about Recommence, our gender-neutral return-to-work initiative. The Firm offers excellent benefits, a casual, collegial working environment, and an attractive compensation package. For further information about our recruitment process, including how applicant data will be processed, please visit https://www.deshawindia.com/careers Members of the D. E. Shaw group do not discriminate in employment matters on the basis of sex, race, colour, caste, creed, religion, pregnancy, national origin, age, military service eligibility, veteran status, sexual orientation, marital status, disability, or any other protected class.
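The scrape-and-parse workflow described above (download HTML, then transform it into structured records) can be sketched with the standard library alone. The page fragment and CSS class names below are invented for illustration; real spiders would first fetch the page, then parse it as shown:

```python
from html.parser import HTMLParser

# Hypothetical page fragment standing in for a downloaded HTML document.
PAGE = """
<table>
  <tr><td class="name">ACME Corp</td><td class="price">12.40</td></tr>
  <tr><td class="name">Globex</td><td class="price">7.15</td></tr>
</table>
"""

class RowExtractor(HTMLParser):
    """Collect <td class="..."> cells into one dict per table row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._field = [], {}, None

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._field = dict(attrs).get("class")

    def handle_data(self, data):
        if self._field and data.strip():
            self._row[self._field] = data.strip()

    def handle_endtag(self, tag):
        if tag == "td":
            self._field = None
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = {}

parser = RowExtractor()
parser.feed(PAGE)
print(parser.rows)
```

Libraries named in the posting (Beautiful Soup, Scrapy) wrap this same event-driven parsing in a friendlier API; the structured-records output is what feeds the monitoring and reporting systems described above.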
Posted 6 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer II Role: Full Stack Developer (TypeScript, Next.js, SQL) About Trimble AECO Trimble AECO’s Viewpoint solutions empower contractors to optimize construction project management, leveraging data to reduce risk and enhance profitability. Our cloud-based innovations bridge critical business functions—accounting, project management, and field operations—delivering scalable, intuitive solutions for organizations of all sizes. By integrating cutting-edge technology, we drive efficiency, accountability, and better project outcomes. Who Are We Looking For? We seek a skilled Full Stack Developer with 3+ years of experience in TypeScript/JavaScript, modern web frameworks, and cloud-native development. You’ll design, build, and deploy high-performance software for the construction industry, collaborating with cross-functional teams to deliver scalable solutions. A strong focus on clean code, unit testing, and best practices is essential. Key Responsibilities Develop and maintain full-stack applications using TypeScript, Next.js, and SQL (mandatory) Write unit tests and ensure code reliability. Collaborate with product teams to translate business needs into technical solutions. Optimize backend services, APIs, and database performance. Mentor junior developers and promote best practices. Troubleshoot issues and provide technical guidance. Stay updated with emerging technologies and industry trends. Required Skills And Qualifications 3 to 5 years of professional experience in full-stack development. Strong expertise in TypeScript/JavaScript and modern frameworks. Experience with unit testing (Jest, Mocha, etc.). Next.js (or similar React-based frameworks) - High proficiency in building complete applications with Next.js, including its capabilities for UI (pages, components), rendering (SSR/SSG), and API development (API Routes). Node.js: Deep understanding of the Node.js runtime environment, its asynchronous nature, and core APIs.
MySQL: Good proficiency in designing database schemas and writing complex, optimized SQL queries directly from a server-side environment. TypeScript: Good proficiency in using TypeScript across the full stack to build robust, scalable, and maintainable applications Familiarity with RESTful APIs and microservices architecture. Good To Have Jest (or other testing framework): Experience writing effective and thorough automated tests for JavaScript/TypeScript code. AWS (Amazon Web Services): Familiarity with core AWS services (e.g., EC2, S3, RDS, Lambda) and modern cloud deployment strategies. JSON: Deep proficiency in structuring, parsing, and manipulating JSON data for APIs and data exchange. CI/CD: Experience with setting up and maintaining continuous integration and deployment pipelines. Soft Skills Strong problem-solving and communication skills. Passion for learning and adopting new technologies. Education BE/BTech/ME/MTech/M.S. in Computer Science or related field (or equivalent experience).
Posted 6 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role We are looking for a Senior Data Scientist to lead the design and development of intelligent, data-driven systems that power talent discovery, candidate-job matching, and workforce insights. In this role, you'll build and deploy models that process unstructured data at scale, extract actionable insights, and deliver real-time recommendations to enhance decision-making across the talent lifecycle. If you're passionate about building smart systems that solve real-world problems in the intersection of data, people, and technology this is your calling. Key Responsibilities Build and optimize machine learning models for use cases like candidate-job matching, resume parsing, talent ranking, recommendation systems, and profile enrichment. Apply NLP techniques to extract and analyze insights from large volumes of unstructured data (resumes, job descriptions, interviews, etc.). Lead experimentation efforts including A/B testing, model comparisons, and business metric evaluation. Collaborate with product, engineering, and data teams to productionize ML pipelines and ensure model reliability at scale. Develop interpretable models and frameworks that deliver explainable AI (XAI) capabilities. Identify and evaluate data sources (internal & external) for enhancing model performance and domain coverage. Design and maintain systems to monitor model drift, retraining workflows, and feature performance in production. Must-Have Skills 5+ years of hands-on experience in data science or machine learning roles, ideally in search, recommendation, or NLP-focused systems. Strong proficiency in Python, including packages like scikit-learn, spaCy, transformers, pandas, NumPy, TensorFlow, or PyTorch. Experience with natural language processing (NLP), including named entity recognition, embeddings, text classification, and semantic search. Solid experience working with structured and unstructured data, and building scalable data pipelines. 
Strong command over SQL and experience with distributed data systems (e.g., Snowflake, BigQuery). Demonstrated experience taking models from experimentation to production, with clear metrics and monitoring in place. Excellent problem-solving and communication skills. Good-to-Have Skills: Knowledge of recommendation engines, graph-based models, or search ranking algorithms. Exposure to cloud platforms like AWS, Azure, or GCP, and MLOps workflows (CI/CD for models). Experience in talent tech, HR tech, or recruitment intelligence platforms is a strong plus. Familiarity with vector databases, embeddings, and semantic similarity search. (ref:hirist.tech)
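The embedding-based candidate-job matching this role centers on reduces, at its core, to ranking vectors by similarity. A toy sketch with hypothetical 3-dimensional embeddings follows (real systems use learned sentence embeddings with hundreds of dimensions and a vector database for retrieval):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def rank(job_vec, candidate_vecs):
    """Rank candidate ids by embedding similarity to the job vector."""
    scored = [(cosine(job_vec, vec), cid) for cid, vec in candidate_vecs.items()]
    return [cid for score, cid in sorted(scored, reverse=True)]

# Toy embeddings: c1 and c3 point roughly the same way as the job, c2 does not.
job = [1.0, 0.0, 1.0]
candidates = {"c1": [0.9, 0.1, 0.8], "c2": [0.0, 1.0, 0.0], "c3": [1.0, 0.2, 1.1]}
print(rank(job, candidates))
```

Production semantic search swaps the brute-force loop for an approximate nearest-neighbor index, but the similarity metric and the ranking step are the same.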
Posted 6 days ago