2.0 - 4.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. In this role, you will work for IBM BPO, part of IBM Consulting, which accelerates digital transformation using agile methodologies, process mining, and AI-powered workflows. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities
As Process Analyst - Procure to Pay (P2P), you are responsible for invoice processing, vendor master management, query resolution, indexing, and invoice reconciliation. You should be flexible to work in shifts. Your primary responsibilities include:
- Recording and maintaining PO and non-PO invoices and handling both manual and automatic payment requests.
- Performing end-to-end vendor master activities such as creation, changes, verification, cleansing, and identifying duplicate records (see the sketch below).
- Collaborating with stakeholders on coding and approvals, addressing blocked-invoice issues, and ensuring timely posting of payments and expenses in the accounting software.
- Processing travel and expense claims, managing payments, resolving duplicate-payment issues, recovering funds, and executing payment proposals.
- Adhering to client Service Level Agreements (SLAs) and meeting the specified timelines.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Commerce graduate with a minimum of 2-4 years of experience in Accounts Payable.
- Experience in invoice and vendor management, query resolution, and invoice reconciliation.
- Proven working knowledge of payment reporting and reconciliation activities.

Preferred technical and professional experience
- Proficient in MS Office applications and any ERP software as an end user.
- Self-directed, ambitious achiever who meets targets effectively.
- Skilled at thriving under deadlines and contributing to change management, with strong interpersonal teamwork.
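One of the bullets above covers identifying duplicate vendor-master records. As a rough illustration only (not IBM tooling; the column names "vendor_name" and "tax_id" are assumptions), a pandas sketch might flag candidate duplicates by normalising names and tax IDs:

```python
# Hypothetical duplicate-vendor check; column names are assumptions.
import pandas as pd

def flag_duplicate_vendors(vendors: pd.DataFrame) -> pd.DataFrame:
    """Return rows sharing a normalised name or a tax ID with another vendor."""
    df = vendors.copy()
    # Normalise names: lowercase and strip everything but letters/digits.
    df["name_key"] = (
        df["vendor_name"].str.lower().str.replace(r"[^a-z0-9]", "", regex=True)
    )
    dup_name = df.duplicated("name_key", keep=False)
    dup_tax = df["tax_id"].notna() & df.duplicated("tax_id", keep=False)
    return df[dup_name | dup_tax]

sample = pd.DataFrame({
    "vendor_id": [101, 102, 103],
    "vendor_name": ["Acme Ltd.", "ACME LTD", "Globex"],
    "tax_id": ["IN123", "IN123", "IN999"],
})
print(flag_duplicate_vendors(sample))  # flags 101 and 102
```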
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
Remote
Senior Software Engineer I (MS SQL Server Database Engineer)

Job Description
Opportunity Summary: We are looking for an enthusiastic and dynamic individual with overall experience of 5+ years to join Upland India as a Senior Software Engineer in Microsoft SQL Server for our Upland PSA products.

What would you do?
- Develop, test, and maintain database code.
- Create or modify database objects and/or write SQL code in support of application needs.
- Monitor and improve database performance and capacity.
- Ensure that database systems are safeguarded, and implement the security measures necessary to ensure data integrity.
- Assist with upgrading the database for new version releases when needed.
- Sync data across data sources.
- Conduct research on emerging database and application development software products, languages, and standards in support of development efforts.

What are we looking for?
Technical Skills: The following skills are needed for this role.
Experience: Expert-level (5+ years) hands-on database development and maintenance experience in MS SQL Server.
Primary Skills:
- Excellent (5+ years) hands-on experience developing MS SQL Server stored procedures, functions, triggers, queries, and scripts.
- Excellent (3+ years) hands-on MS SQL Server experience in troubleshooting, performance tuning, debugging, and query optimization.
- Excellent (3+ years) experience in indexing and in finding and resolving potential database deadlocks (see the tuning sketch below).
- Experience in MS SQL Server database migrations and upgrades.
- Experience with analytics and reporting.
Secondary Skills: It would be advantageous if the candidate also has the following:
- An MS SQL Server / Azure SQL certification, or willingness to obtain one in a short time frame.
- Nice-to-have experience: ETL, Azure SQL.
Soft Skills: Strong writing skills are essential, as is the ability to work effectively in a fully remote team without the need for a physical office. The ideal candidate thrives in a collaborative team environment with a diverse range of people and is passionate about delivering an amazing customer experience. They should be adaptable, capable of changing their mind and influencing others.
Growth Skills: The candidate should have a strong work ethic, be a self-starter with a desire to grow, and consistently seek better ways to accomplish tasks.

Qualification: Bachelor's degree or technical institute degree/certificate in Computer Science, Information Systems, or another related field, or an equivalent combination of knowledge and experience.

This role requires regular overlap with multiple time zones for planning meetings, status updates, etc. The duration of these overlaps can change depending on the type of meeting. Upland India offers flexibility in managing your working hours to support your work-life balance; you can find out more about this during your interview conversation. Upland Software is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or other legally protected status.

About Upland PSA: Upland PSA focuses on the planning, execution, and financial management of projects. This includes project-related business functions such as timesheet and leave-request management, expense entry and approvals, project planning and scheduling, resource allocation, and invoicing.

About Upland: Upland Software (Nasdaq: UPLD) helps global businesses accelerate digital transformation with a powerful cloud software library that provides choice, flexibility, and value. Upland India is a fully owned subsidiary of Upland Software, headquartered in Bangalore. We are a remote-first company; interviews and onboarding are conducted virtually.
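The indexing and tuning work described above often starts from SQL Server's dynamic management views. As a minimal sketch (assuming pyodbc and a locally reachable SQL Server instance; the connection string is a placeholder), one might list candidate missing indexes ranked by estimated impact:

```python
# Hypothetical tuning helper: query SQL Server's missing-index DMVs.
import pyodbc

QUERY = """
SELECT TOP 10
    mid.statement        AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
"""

# Placeholder connection string; adjust driver, server, and auth to your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes;"
)
for row in conn.cursor().execute(QUERY):
    print(row.table_name, row.equality_columns, row.user_seeks)
```

The DMV output is only a hint list; each suggested index should still be weighed against write overhead before being created.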
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. In this role, you will work for IBM BPO, part of IBM Consulting, which accelerates digital transformation using agile methodologies, process mining, and AI-powered workflows. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities
As Senior Process Analyst - Procure to Pay (P2P), you are responsible for invoice processing, vendor master management, query resolution, indexing, and invoice reconciliation. You should be flexible to work in shifts. Your primary responsibilities include:
- Creating, modifying, verifying, and cleansing the vendor master; identifying duplicate records and ensuring accurate maintenance of invoice receipt, verification, and processing.
- Recording invoices, both purchase-order based and non-purchase-order based (unsupported invoices).
- Coordinating with various stakeholders, obtaining coding and approvals, and resolving issues around blocked invoices.
- Ensuring that payment and expense entries are promptly recorded in the accounting software, encompassing both manual and automatic payment requests.
- Processing travel and expense claims, managing payments, resolving duplicate payments, recovering funds, and verifying and executing payment proposals.
- Handling queries for vendor statement reconciliation through calls and emails.
- Adhering to client SLAs (Service Level Agreements) and timelines.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Commerce graduate with a minimum of 3-6 years of experience in Accounts Payable.
- Experience in invoice and vendor management, query resolution, and invoice reconciliation.
- Proven working knowledge of payment reporting and reconciliation activities.

Preferred technical and professional experience
- Proficient in MS Office applications and any ERP software as an end user.
- Ambitious individual who can work under their own direction towards agreed targets/goals.
- Ability to work under tight timelines; has been part of change management initiatives.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Enhances technical skills by attending educational workshops, reviewing publications, etc.
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. In this role, you will work for IBM BPO, part of IBM Consulting, which accelerates digital transformation using agile methodologies, process mining, and AI-powered workflows. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities
As Process Analyst - Procure to Pay (P2P), you are responsible for invoice processing, vendor master management, query resolution, indexing, and invoice reconciliation. You should be flexible to work in shifts. Your primary responsibilities include:
- Recording and maintaining PO and non-PO invoices and handling both manual and automatic payment requests.
- Performing end-to-end vendor master activities such as creation, changes, verification, cleansing, and identifying duplicate records.
- Collaborating with stakeholders on coding and approvals, addressing blocked-invoice issues, and ensuring timely posting of payments and expenses in the accounting software.
- Processing travel and expense claims, managing payments, resolving duplicate-payment issues, recovering funds, and executing payment proposals.
- Adhering to client Service Level Agreements (SLAs) and meeting the specified timelines.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Commerce graduate with a minimum of 2-4 years of experience in Accounts Payable.
- Experience in invoice and vendor management, query resolution, and invoice reconciliation.
- Proven working knowledge of payment reporting and reconciliation activities.

Preferred technical and professional experience
- Proficient in MS Office applications and any ERP software as an end user.
- Self-directed, ambitious achiever who meets targets effectively.
- Skilled at thriving under deadlines and contributing to change management, with strong interpersonal teamwork.
Posted 2 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements

Position Summary
We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.

Job Responsibilities
- Handle engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability.
- Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps.
- Develop and implement automation using technologies such as Ansible, Python, and shell scripting.
- Manage CI/CD deployments and maintain code repositories.
- Utilize Infrastructure/Configuration as Code practices to streamline processes.
- Work closely with development teams to integrate database and observability/logging tools effectively.
- Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux, both on-premises and cloud-based.
- Design and develop the physical layers of databases to support various application needs; implement backup, recovery, archiving, and conversion strategies along with performance tuning; manage job scheduling, application releases, and database changes; and implement database and infrastructure security best practices to meet compliance requirements.
- Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues.
- Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput.
- Build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux.
- Debug production issues by analyzing logs directly and using tools like Splunk.
- Work in an Agile model with an understanding of Agile concepts and Azure DevOps.
- Learn new technologies based on demand and help team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education
- Bachelor's degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience.
- MongoDB Certified DBA or Splunk Certified Administrator is a plus.
- Experience with cloud platforms like AWS, Azure, or Google Cloud.

Experience (in years): 7+ years total IT experience, including 4+ years of relevant experience in MongoDB and working experience as a Splunk administrator.

Technical Skills
- In-depth experience with either MongoDB or Splunk, with a preference for exposure to both.
- Strong enthusiasm for learning and adopting new technologies.
- Experience with automation tools like Ansible, Python, and shell scripting.
- Proficiency in CI/CD deployments, DevOps practices, and managing code repositories.
- Knowledge of Infrastructure/Configuration as Code principles. Developer experience is highly desired. Data engineering skills are a plus. Experience with other database technologies and observability tools is a plus.
- Extensive experience managing and optimizing MongoDB databases, designing robust schemas, and implementing security best practices to ensure high availability, data integrity, and performance for mission-critical applications.
- Working experience in database performance tuning with MongoDB tools and techniques.
- Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
- Extensive experience in backup and recovery strategy: design, configuration, and implementation using backup tools (mongodump, mongorestore) and Rubrik.
- Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes (see the connectivity sketch below).
- Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux.
- Experience with Splunk migration and upgrades on standalone Linux and cloud platforms is a plus.
- Ability to perform application administration for a single security information management system using Splunk.
- Working knowledge of Splunk Search Processing Language (SPL), Splunk architecture, and its components (indexer, forwarder, search head, deployment server).
- Extensive experience with both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance.
- Experience managing infrastructure security policy to industry best-practice standards by designing, configuring, and implementing privileges and policies on the database and Splunk using RBAC.
- Scripting and automation experience using DevOps practices, repositories, and Infrastructure as Code.
- Working experience with containers (AKS and OpenShift) is a plus.
- Working experience with cloud platforms (Azure, Cosmos DB) is a plus.
- Strong knowledge of ITSM processes and tools (ServiceNow).
- Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements
- Strong problem-solving abilities and a proactive approach to identifying and resolving issues.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
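As a small illustration of the TLS and replication checks mentioned above (a sketch only, with hypothetical hostnames and CA path, not MetLife's environment), pymongo can report replica-set member health over an encrypted connection:

```python
# Hypothetical replica-set health check over TLS with pymongo.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db1.example.com:27017,db2.example.com:27017/?replicaSet=rs0",
    tls=True,
    tlsCAFile="/etc/ssl/mongo-ca.pem",  # assumed CA bundle path
)

# replSetGetStatus reports each member's state (PRIMARY, SECONDARY, ...).
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(f'{member["name"]}: {member["stateStr"]} (health={member["health"]})')
```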
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The candidate should have:
- Worked in the Accounts Payable domains of indexing, vendor management, helpdesk, and payments.
- Knowledge of the three-way match, PO and non-PO invoices, GR/IR, etc. (a minimal three-way-match sketch follows this list).
- Worked with SAP FICO for invoicing, indexing, and payment runs.
- The ability to scan through contracts and match the relevant terms and conditions against the invoice.
- Experience in workflow management, with the ability to validate and identify the right cost center and GL accounts.
- Experience in vendor reconciliations and in the daily listing of available invoices at the payment-review stage.
- Worked on accrual accounting of import costs and GR/IR clearing.
- Knowledge of parking and posting invoices in SAP.
- Knowledge of archiving invoices in SAP and other archiving modules.
- The ability to consistently look for ways to improve efficiency and assist in the accounts payable process.
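For readers unfamiliar with the three-way match named above, here is a minimal sketch of the idea (field names and the price tolerance are illustrative assumptions, not an SAP API): an invoice clears for posting only when its quantity and price agree with the purchase order and the goods receipt.

```python
# Illustrative three-way match; not SAP FICO code.
from dataclasses import dataclass

@dataclass
class Doc:
    quantity: float
    unit_price: float

def three_way_match(po: Doc, goods_receipt: Doc, invoice: Doc,
                    tolerance: float = 0.01) -> bool:
    """True when PO, goods receipt, and invoice agree within tolerance."""
    # Billed quantity must not exceed what was received or ordered.
    qty_ok = invoice.quantity <= goods_receipt.quantity <= po.quantity
    # Invoice price must match the PO price within a relative tolerance.
    price_ok = abs(invoice.unit_price - po.unit_price) <= tolerance * po.unit_price
    return qty_ok and price_ok

print(three_way_match(Doc(100, 5.00), Doc(100, 5.00), Doc(100, 5.02)))  # True
print(three_way_match(Doc(100, 5.00), Doc(90, 5.00), Doc(100, 5.00)))   # False: billed > received
```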
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Overview
We are seeking a skilled SEO Executive to drive organic growth and enhance search engine visibility for our B2B SaaS products. The ideal candidate will have a strong understanding of SEO strategy, technical SEO, content optimization, and link-building techniques tailored for SaaS and B2B audiences.

Key Responsibilities
- Develop and execute SEO strategies to improve organic rankings and drive targeted traffic for SaaS products.
- Conduct keyword research and competitor analysis to identify high-value opportunities.
- Optimize on-page elements, including meta titles, descriptions, headers, and structured data (see the audit sketch below).
- Work closely with content writers to create SEO-friendly blogs, landing pages, and pillar content.
- Manage technical SEO, including site audits, crawling, indexing, and page-speed optimization.
- Implement and oversee off-page SEO strategies such as backlink building, guest posting, and digital PR.
- Monitor and analyze SEO performance using Google Analytics, Google Search Console, Ahrefs, SEMrush, Microsoft Clarity, and other SEO tools.
- Collaborate with the marketing and product teams to align SEO strategies with overall business goals.
- Stay updated on the latest Google algorithm updates and industry best practices.

Required Skills & Qualifications
- 3+ years of experience in SEO, preferably in a B2B SaaS environment.
- Strong knowledge of on-page, off-page, and technical SEO.
- Strong experience implementing programmatic SEO.
- Strong experience ranking in AI Overview results and in AI-driven search (blog and commercial pages).
- Experience with Google Search Console, Google Analytics, Ahrefs, SEMrush, Moz, Screaming Frog, and Microsoft Clarity.
- Ability to conduct detailed SEO audits and implement the necessary improvements.
- Familiarity with content marketing and link-building strategies.
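As a minimal sketch of the on-page checks referenced above (assuming the requests and BeautifulSoup libraries; the URL is a placeholder, not a company property), a quick audit might pull the title, meta description, H1 count, and any JSON-LD structured data:

```python
# Hypothetical on-page SEO audit helper.
import json
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else None
    desc = soup.find("meta", attrs={"name": "description"})
    return {
        "title": title,
        "title_length": len(title) if title else 0,
        "meta_description": desc.get("content") if desc else None,
        "h1_count": len(soup.find_all("h1")),
        # JSON-LD blocks carry the structured data search engines read.
        "structured_data": [
            json.loads(tag.string or "{}")
            for tag in soup.find_all("script", type="application/ld+json")
        ],
    }

print(audit_page("https://example.com"))
```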
Posted 2 weeks ago
0 years
0 Lacs
Delhi, India
On-site
We are looking for a passionate and experienced Senior Full Stack Developer with strong hands-on expertise in Angular 16+, .NET Core 6, SQL Server, and Selenium-based test automation. The ideal candidate will be responsible for designing, developing, and maintaining scalable web applications while ensuring high-quality test coverage. Experience with AWS services is a plus.

Key Responsibilities:
- Develop robust and scalable web applications using Angular 16+ and .NET Core 6.
- Design and optimize relational databases using SQL Server.
- Implement and maintain automated test cases using Selenium or similar frameworks.
- Participate in all phases of the software development lifecycle, including requirements, design, development, testing, and deployment.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Identify and troubleshoot application issues and bugs, and optimize application performance.
- Write clean, maintainable, and efficient code following industry best practices.
- (Optional) Leverage AWS services for deployment, monitoring, and scaling where applicable.

Required Skills:
- Frontend: Angular 16+
- Backend: .NET Core 6
- Database: SQL Server (stored procedures, performance tuning, indexing)
- Testing: Selenium or similar automation testing tools (experience writing and maintaining test scripts)
Posted 2 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
As a Senior Data Science Engineer in IOL's Data Team, you will lead the development of advanced predictive models to power a smart caching layer for our B2B hospitality marketplace. Handling an unprecedented scale of data (2 billion searches, 1 billion price verifications, and 100 million bookings daily), you will design machine learning solutions to predict search patterns and prefetch data from third-party (3P) suppliers, reducing their infrastructure load and improving system reliability. This role demands deep expertise in big data, machine learning, and distributed systems, as well as the ability to architect scalable, data-driven solutions in a fast-paced environment.

The Challenge
IOL operates a high-traffic B2B marketplace that matches hotel room supply with demand. Our platform processes:
- Searches: 2 billion daily queries for hotel prices based on hotel ID, room type, check-in date, length of stay, and party size.
- Price verifications: 1 billion daily checks to confirm pricing.
- Bookings: 100 million daily bookings.

Key Responsibilities
- Predictive modeling: Design and implement machine learning models to predict high-demand search patterns based on historical data (e.g., hotel IDs, room types, dates, and party sizes).
- Big data processing: Develop scalable data pipelines to process and analyze massive datasets (2 billion searches daily) using distributed computing frameworks.
- Smart caching layer: Architect and optimize a predictive cache prefetcher that proactively populates the cache cluster (Redis) with high-value data during 3P off-peak hours (a minimal sketch follows this posting).
- Data analysis: Leverage Elasticsearch and the ES searches log to extract insights from search patterns, seasonal trends, and user behavior.
- Model optimization: Continuously refine predictive models to handle the massive permutations of search parameters, ensuring high accuracy and low latency.
- Collaboration: Work with the Data Team, platform engineers, and 3P proxy teams to integrate models into the existing architecture (load balancer, API gateway, service router, cache cluster).
- Performance monitoring: Monitor cache hit/miss ratios, model accuracy, and system performance, using tools like the Cache Stats Collector to drive optimization.
- Scalability: Ensure models and pipelines scale horizontally to handle increasing data volumes and traffic spikes.
- Innovation: Stay updated on advancements in machine learning, big data, and distributed systems, proposing novel approaches to enhance predictive capabilities.

Required Skills & Qualifications
- Education: Master's or Ph.D. in Data Science, Computer Science, Statistics, or a related field.
- Experience:
  - 7+ years of experience in data science, with a focus on machine learning and predictive modeling.
  - 5+ years of hands-on experience processing and analyzing big data sets (terabyte-scale or larger) in distributed environments.
  - Proven track record of building and deploying machine learning models in production for high-traffic systems.
- Technical skills:
  - Deep expertise in machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) and algorithms (e.g., regression, clustering, time-series forecasting, neural networks).
  - Extensive experience with big data technologies (e.g., Apache Spark, Hadoop, Kafka) for distributed data processing.
  - Proficiency in Elasticsearch for search and analytics, including querying and indexing large datasets.
  - Strong programming skills in Python, with experience in data science libraries (e.g., pandas, NumPy, Dask).
  - Familiarity with Redis or similar in-memory data stores for caching.
  - Knowledge of cloud platforms (e.g., AWS, Azure, GCP) for deploying and scaling data pipelines.
  - Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB) for data extraction and transformation.
  - Proficiency in designing and optimizing data pipelines for high-throughput, low-latency systems.
- Problem-solving: Exceptional ability to tackle complex problems, such as handling massive permutations of search parameters and predicting trends in dynamic datasets.
- Communication: Strong written and verbal communication skills to collaborate with cross-functional teams and present insights to stakeholders.
- Work style: Self-motivated, proactive, and able to thrive in a fast-paced, innovative environment.

Preferred Skills
- Experience in the hospitality or travel industry, particularly with search or booking systems.
- Familiarity with real-time data streaming and event-driven architectures (e.g., Apache Kafka, Flink).
- Knowledge of advanced time-series forecasting techniques for seasonal and cyclical data.
- Exposure to reinforcement learning or online learning for dynamic model adaptation.
- Experience optimizing machine learning models for resource-constrained environments (e.g., edge devices or low-latency systems).
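As a minimal sketch of the prefetching idea above (not IOL's system; fetch_from_supplier and the key format are hypothetical stand-ins for the 3P proxy call), one could rank historical search keys by frequency and warm Redis with the top slice during supplier off-peak hours:

```python
# Hypothetical cache prefetcher; key format and supplier call are assumptions.
from collections import Counter
import json
import redis  # redis-py

def fetch_from_supplier(key: str) -> dict:
    return {"key": key, "price": 120.0}  # placeholder supplier response

def prefetch_top_searches(search_log: list[str], top_n: int = 1000,
                          ttl_seconds: int = 6 * 3600) -> int:
    r = redis.Redis(host="localhost", port=6379)
    warmed = 0
    # Rank keys by how often they were searched, then warm the top slice.
    for key, _count in Counter(search_log).most_common(top_n):
        if not r.exists(key):  # don't overwrite fresher entries
            r.setex(key, ttl_seconds, json.dumps(fetch_from_supplier(key)))
            warmed += 1
    return warmed

# A key might encode hotel ID, room type, check-in, nights, and party size:
log = ["H123|DLX|2024-12-01|2|2"] * 3 + ["H456|STD|2024-12-05|1|2"]
print(prefetch_top_searches(log, top_n=2))
```

A frequency count is only the simplest predictor; the posting's time-series and seasonality modeling would replace Counter with a learned demand forecast.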
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At DigitalOcean, we're not just simplifying cloud computing, we're revolutionizing it. We serve the developer community and the businesses they build with a relentless pursuit of simplicity. With our customers at the heart of what we do, and powered by a diverse culture that values boldness, speed, simplicity, ownership, and a growth mindset, we are committed to building truly useful products. Come swim with us!

Position Overview
We are looking for a Software Engineer who is passionate about writing clean, maintainable code and eager to contribute to the success of our platform. As a Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing. We're looking for an experienced Software Engineer II to join our growing engineering team. You'll work on building and maintaining features that directly impact our users, from creating scalable backend systems to improving performance for thousands of customers.

What You'll Do
- Design, develop, and maintain backend systems and services that power our platform.
- Collaborate with cross-functional teams to design and implement new features, ensuring the best possible developer experience for our users.
- Troubleshoot complex technical problems and find efficient solutions in a timely manner.
- Write high-quality, testable code, and contribute to code reviews to maintain high standards of development practices.
- Participate in architecture discussions and contribute to the direction of the product's technical vision.
- Continuously improve the reliability, scalability, and performance of the platform.
- Participate in rotating on-call support, providing assistance with production systems when necessary.
- Mentor and guide junior engineers, helping them grow technically and professionally.

What You'll Add to DigitalOcean
- A degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Proficiency in at least one modern programming language (e.g., Go, Python, Ruby, Java), with a strong understanding of data structures, algorithms, and software design principles.
- Hands-on experience with cloud computing platforms and infrastructure-as-code practices.
- Strong knowledge of RESTful API design and web services architecture.
- Demonstrated ability to build scalable and reliable systems that operate in production at scale.
- Excellent written and verbal communication skills to collaborate effectively with teams.
- A deep understanding of testing principles and the ability to write automated tests that ensure code quality.
- Familiarity with agile methodologies, including sprint planning, continuous integration, and delivery.
- Knowledge of advanced database concepts such as sharding, indexing, and performance tuning.
- Exposure to monitoring and observability tools such as Prometheus, Grafana, or the ELK Stack.
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Familiarity with Kubernetes, Docker, and other containerization/orchestration tools.

Why You'll Like Working for DigitalOcean
We innovate with purpose. You'll be part of a cutting-edge technology company on an upward trajectory, proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.

We prioritize career development. At DO, you'll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education, and all employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.

We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to local employee meetups to a flexible time-off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.

We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.

We value diversity and inclusion. We are an equal-opportunity employer, and we recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.

This job is located in Hyderabad, India.
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a multi-skilled, multi-faceted Technical Development Lead with a deep expertise in Large Language models, Generative AI, and Full-Stack Cloud Native Application Development, to join our dynamic product development team. As a Development Lead, you will play a key role in developing cutting-edge products and innovative solutions for our clients, combining the power of LLMs, Generative AI, and Agentic AI together with Cloud Native Full Stack Application Development. Your primary focus will be on driving bespoke product development to build creative and impactful solutions, that enhance the product portfolio. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of technology, as well as rapidly learning new skills and technology on the job. The ideal candidate will also combine traditional product development and cloud native app dev skills with modern and emerging Generative AI and LLM App development skills. Job Description: Responsibilities : Develop, implement and optimize scalable AI-enabled products, cloud-native apps and cloud solutions. Technical delivery and execution of applications involving Cloud Platforms, Cloud Native Apps, and Cloud AI Services Drive solution tech design and implementation across all layers of the application stack – including front-end, back-end, APIs, data and AI services Design and build enterprise products and full-stack applications on the MERN stack, with clear separation of concerns across layers Design and build web apps and solutions that leverage LLM models, and Generative AI workflows Leverage Multi modal AI capabilities supporting all content types and modalities, including text, imagery, audio, speech and video Constantly Research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation. Drive product development and delivery within tight timelines Collaborate with full-stack developers, engineers, and quality engineers, to develop and integrate solutions into existing enterprise products. Collaborate with technology leaders and cross-functional teams to develop and validate client requirements and rapidly translate them into working solutions. Key Skills required : Full Stack MERN App Dev, Front-End + Back-End Development, API Dev, Micro Services Cloud Native App Dev, Cloud Solutions LLM, LLM App Dev AI Agents, Agentic AI Workflows Generative AI, Multi Modal AI , Creative AI Working with Text, Imagery, Speech, Audio and Video AI Must-Have capabilities: Strong Expertise in MERN stack (JavaScript) including client-side and server-side JavaScript Strong Expertise in Python based development, including Python App Dev for LLM integration Well-rounded in both programming languages Hands-on Experience in front-end and back-end development Hands-on Experience in Data Processing and Data Integration Hands-on Experience in API integration Hands-on Experience in LLM App Dev and LLM enabled solutions. Hands-on Experience in Multi modal AI models and tools. 
JavaScript / MERN stack - competencies : Minimum 4 years hands-on experience in working with Full-Stack MERN apps, using both client-side and server-side JavaScript Strong experience in client-side JavaScript Apps and building Static Web Apps + Dynamic Web Apps both in JavaScript Strong hands-on experience in the React.js framework, and building stateful and stateless front-end apps using React.js components Strong hands-on experience in Server-Side JavaScript, and using frameworks like Node.js, to build services and APIs in JavaScript Good experience with a Micro Services solution, and how to build the same with Node.js Gen-AI / LLM App Dev with Python – competencies : Minimum 2 years hands-on experience in Python development Minimum 2 years hands-on experience in working with LLMs and LLM models Strong experience with integrating data, both internal + external datasets, and building data pipelines, to ground LLMs in domain knowledge Strong hands-on experience with Data Pre-Processing and Processing for LLM Apps and solutions Solid Hands-on Experience with building end-to-end RAG pipelines and custom AI indexing solutions to ground LLMs and enhance LLM output Good Experience with building AI and LLM enabled Workflows Hands-on Experience integrating LLMs with external tools such as Web Search Ability to leverage advanced concepts such as tool calling and function calling, with LLM models Hands-on Experience with using LLMs for Research use cases and Research Workflows, to enable AI Research Assistants Hands-on Experience with Conversational AI solutions and chat-driven experiences Experience with multiple LLMs and models – primarily GPT-4o, GPT o1, and o3 mini, and preferably also Gemini, Claude Sonnet, etc. Experience and Expertise in Cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI, and perferably also AWS Bedrock, and/or GCP Vertex AI. Experience with vector databases (Azure AI Search, AWS OpenSearch Serverless, pgvector, etc.). Hands-on Experience with Assistants and the use of Assistants in orchestrating with LLMs Hands-on Experience working with AI Agents and Agent Services. Implement LLMOps processes, LLM evaluation frameworks, and the ability to manage Gen-AI apps and models across the lifecycle from prompt management to output evaluation. Multi Modal AI – competencies : Hands-on Experience with intelligent document processing and document indexing + document content extraction and querying, using multi modal AI Models Hands-on Experience with using Multi modal AI models and solutions for Imagery and Visual Creative – including text-to-image, image-to-image, image composition, image variations, etc. Hands-on Experience with Computer Vision and Image Processing using Multi-modal AI – for use cases such as object detection, automated captioning, etc. Hands-on Experience with using Multi modal AI for Speech – including Text to Speech, Pre-built vs. Custom Voices Hands-on Experience with building Voice-enabled and Voice-activated experiences, using Speech AI and Voice AI solutions Hands-on Experience with leveraging APIs to orchestrate across Multi Modal AI models Ability to lead design and development teams, for Full-Stack MERN Apps and Products/Solutions, built on top of LLMs and LLM models. 
Nice-to-Have capabilities:
MERN stack and cloud-native app dev: Hands-on working experience with server-side JavaScript frameworks for building domain-driven microservices, including Nest.js and Express.js. Hands-on working experience with BFF frameworks such as GraphQL. Hands-on working experience with a federated graph architecture. Hands-on working experience with API management and API gateways. Experience working with container apps and containerized environments. Hands-on working experience with Web Components and portable UI components.
Python / ML / LLM / Gen-AI app dev: Hands-on experience building agentic AI workflows that enable iterative improvement of output. Hands-on experience with both single-agent and multi-agent orchestration solutions and frameworks. Hands-on experience with different agent communication and chaining patterns. Ability to leverage LLMs for reasoning and planning workflows that enable higher-order “goals” and automated orchestration across multiple apps and tools. Ability to leverage graph databases and “knowledge graphs” as an alternative to or replacement for vector databases, enabling more relevant semantic querying and outputs via LLM models. Good background in machine learning solutions. Good foundational understanding of Transformer models. Some experience with custom ML model development and deployment is desirable. Proficiency in deep learning frameworks such as PyTorch or Keras. Experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry.
Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Permanent
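The agentic-workflow items above boil down, at their simplest, to a loop in which the model decides when to call a tool and the application executes it and feeds the result back. A bare-bones single-agent sketch, assuming the OpenAI tool-calling API, an OPENAI_API_KEY in the environment, and an illustrative model name and toy tool:

```python
# Single-agent tool-calling loop: the model requests a function call,
# we run it locally, then return the result for the final answer.
import json
from openai import OpenAI

client = OpenAI()

def get_word_count(text: str) -> int:
    # Toy tool standing in for web search, retrieval, or any real action.
    return len(text.split())

tools = [{
    "type": "function",
    "function": {
        "name": "get_word_count",
        "description": "Count words in a piece of text.",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}]

messages = [{"role": "user",
             "content": "How many words are in 'graphs ground language models'?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:  # the model chose to call our function
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_word_count(**args)
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": str(result)}]
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```

Multi-agent orchestration frameworks wrap this same call-execute-feed-back cycle with routing, memory, and retry policies.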
Posted 2 weeks ago
8.0 years
0 Lacs
Delhi, India
On-site
Job Description: We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.
Key Responsibilities: Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives.
Qualifications: Bachelor’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts.
Preferred Skills: Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, computer vision (YOLO), the deep learning stack, and NLP using Python. Strong knowledge of and experience in Generative AI / LLM-based development. Strong experience working with key LLM model APIs (e.g., AWS Bedrock or Azure OpenAI/OpenAI) and LLM frameworks for RAG (e.g., LangChain or LlamaIndex). Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure. Expertise in building enterprise-grade, secure data ingestion pipelines (e.g., ETL with Glue jobs, QuickSight) for unstructured data, including indexing, search, and advanced retrieval patterns. Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.), and expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs. Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalents) and frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawlers, and ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit).
Good To Have Skills: Knowledge of and experience in building knowledge graphs in production. Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
Equal Opportunity Employer: Pentair is an Equal Opportunity Employer. With our expanding global presence, cross-cultural insight and competence are essential for our ongoing success. We believe that a diverse workforce contributes different perspectives and creative ideas that enable us to continue to improve every day.
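The text-chunking requirement above comes down to a small amount of code in practice. A minimal fixed-size, overlapping chunker of the kind run before embedding and indexing documents (sizes and the sample text are illustrative; production chunkers usually split on sentence or token boundaries instead of raw characters):

```python
# Fixed-window chunker with overlap, the simplest pre-indexing strategy.
def chunk(text: str, size: int = 120, overlap: int = 30) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    # The overlap preserves context that a hard cut at a chunk boundary
    # would otherwise lose; the range bound guarantees full coverage.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = ("Embedding spaces map text to vectors so that semantically similar "
       "passages land close together. Retrieval then reduces to a nearest-"
       "neighbour search over chunk vectors.")
for i, c in enumerate(chunk(doc)):
    print(i, repr(c))
```

Each chunk would then be embedded and stored in a vector index such as OpenSearch, as the posting describes.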
Posted 2 weeks ago
5.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Job Description: We are seeking a highly skilled and customer-focused GraphDB / Neo4j Solutions Engineer to join our team. This role is responsible for delivering high-quality solution implementations of our GraphDB-based product to customers, and for collaborating with cross-functional teams to ensure customer success. The solution lead is expected to provide in-depth solutions on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The solution lead must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (bash/Python).
Roles and Responsibilities: Collaborate with core engineering, customers, and solution engineering teams for functional and technical discovery sessions. Prepare product and live software demonstrations. Create and maintain public documentation, internal knowledge base articles, and FAQs. Design efficient graph schemas and develop prototypes that address customer requirements (e.g., fraud detection, recommendation engines, knowledge graphs). Apply knowledge of indexing strategies, partitioning, and query optimization in GraphDB. The candidate will work during the EMEA time zone (2 PM to 10 PM shift).
Requirements – Education and Experience: Education: B.Tech in Computer Engineering, Information Technology, or a related field. Experience: 5+ years in a solution lead role on a data-based software product such as GraphDB or Neo4j.
Must-Have Skills: SQL expertise: 4+ years of experience in SQL for database querying, performance tuning, and debugging. Graph databases and GraphDB platforms: 4+ years of hands-on experience with Neo4j or similar graph database systems. Scripting & automation: 4+ years with strong skills in C, C++, and Python for automation, task management, and issue resolution. Virtualization and cloud knowledge: 4+ years with Azure, GCP, or AWS. Management skills: 3+ years of experience with data requirements gathering, data modeling, whiteboarding, and developing/validating proposed solution architectures, plus the ability to communicate complex information and concepts to prospective users in a clear and effective way. Monitoring & performance tools: experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring. Networking & load balancing: proficiency in TCP/IP, load balancing strategies, and troubleshooting network-related issues.
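As a small illustration of the schema-design, indexing, and query-optimization work this role describes, here is a sketch using the official Neo4j Python driver. The connection details and the fraud-style schema are placeholders, and the fan-out rule is only a toy first-pass heuristic:

```python
# Graph schema + indexed lookup + fraud-style fan-out query in Neo4j.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Index the lookup key first so node lookups by account id stay fast
    # as the graph grows (Neo4j 4+/5 index syntax).
    session.run("CREATE INDEX account_id IF NOT EXISTS "
                "FOR (a:Account) ON (a.id)")

    # MERGE is idempotent: nodes and the relationship are created once.
    session.run(
        "MERGE (a:Account {id: $src}) MERGE (b:Account {id: $dst}) "
        "MERGE (a)-[:SENT {amount: $amt}]->(b)",
        src="A1", dst="B2", amt=950.0,
    )

    # Flag accounts that fan out to many counterparties, a common
    # first-pass fraud-detection heuristic.
    result = session.run(
        "MATCH (a:Account)-[:SENT]->(b) "
        "WITH a, count(DISTINCT b) AS fanout WHERE fanout >= $k "
        "RETURN a.id AS account, fanout",
        k=1,
    )
    for record in result:
        print(record["account"], record["fanout"])

driver.close()
```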
Posted 2 weeks ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
SolvEdge (Chennai): We’re dedicated to leveraging technology to make a positive impact in healthcare. Our software solutions are crafted to optimize processes, support patient care, and drive better health outcomes. As we continue to innovate, we’re seeking an experienced PostgreSQL Developer to join our team. If you’re enthusiastic about scalable database development and eager to contribute to meaningful healthcare technology projects, we want you on our journey to empower healthcare professionals with advanced tools and insights.
What You’ll Do: We are looking for a skilled and detail-oriented PostgreSQL Developer with 4–6 years of hands-on experience to join our dynamic engineering team. In this role, you will be responsible for designing, developing, and optimizing PostgreSQL databases that power high-performance applications in the healthcare sector. You will collaborate with architects, backend engineers, and business analysts to deliver reliable and scalable data solutions.
Responsibilities:
Database development and optimization: Design and implement efficient PostgreSQL schemas, indexes, constraints, and relationships. Develop advanced SQL queries, stored procedures, views, and triggers using PostgreSQL. Optimize complex queries and database performance for scalability and speed. Perform data profiling, query tuning, and performance analysis.
Data architecture and modeling: Create and maintain logical and physical data models based on business requirements. Define standards for data consistency, normalization, and integrity. Implement data validation rules and constraints to ensure data accuracy.
Integration and collaboration: Collaborate with backend developers to ensure seamless data access through APIs and services. Design and implement ETL processes for internal data flows and external data ingestion. Work with cross-functional teams to translate business requirements into database logic.
Tools and automation: Utilize tools for database versioning (e.g., Flyway, Liquibase). Automate database deployments and migrations within CI/CD pipelines.
Continuous improvement: Monitor emerging PostgreSQL features and best practices. Recommend and implement improvements in data design, coding practices, and performance strategy.
Qualifications: Bachelor’s degree in Computer Science, Engineering, or an equivalent technical field. 4–6 years of professional experience with PostgreSQL database development. Experience working in Agile/Scrum environments. Exposure to microservices and cloud-native applications is an advantage.
Primary Skills: PostgreSQL: strong proficiency in PostgreSQL and advanced SQL. SQL development: experience building reusable stored procedures, functions, views, CTEs, and triggers. Performance tuning: expertise in optimizing complex queries using indexing, execution plans, and materialized views. Schema design: in-depth knowledge of data modeling, normalization, and relational design. Data integration: experience with data pipelines, ETL processes, and transforming structured/semi-structured data. JSON/JSONB: practical experience working with unstructured data and PostgreSQL’s advanced JSON features. ORMs: experience integrating PostgreSQL with ORMs such as Sequelize, Hibernate, or SQLAlchemy.
Secondary Skills: Experience working with cloud-based PostgreSQL (e.g., AWS RDS, Azure Database for PostgreSQL). Familiarity with RESTful APIs and backend service integration. Working knowledge of NoSQL alternatives, hybrid storage strategies, or data lakes.
CI/CD and DevOps understanding for integrating DB updates into pipelines. Strong analytical and debugging skills. Effective communication and documentation abilities to interact with stakeholders.
Why Apply? Even if you feel you don’t meet every single requirement, we encourage you to apply. We’re looking for passionate individuals who may bring diverse perspectives and skills to our team. At SolvEdge, we value talent and dedication and are committed to fostering growth within our organization.
How to Apply? Ready to make a difference? Submit your resume, a cover letter that highlights your qualifications, and any relevant experience. We look forward to hearing from you! SolvEdge is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
About SolvEdge – Pioneering the Future of Digital Healthcare. Our Expertise: SOLVEDGE stands at the forefront of digital healthcare innovation as a premier healthcare performance company. With over 18 years of dedicated service in the healthcare industry, we specialize in a digital care journey platform that revolutionizes how hospitals and health systems engage, monitor, and connect with patients throughout their healthcare experiences. Our partnership with Fortune 100 medical device companies and hospitals nationwide underscores our position as a trusted partner in healthcare solutions.
Key Features of SOLVEDGE: Our platform is designed to empower healthcare providers with the tools they need to automate and streamline care delivery, thereby improving clinical outcomes and patient satisfaction. Personalized care plans: leveraging evidence-based data, SOLVEDGE delivers digital care plans customized to meet the individual needs and conditions of each patient. Real-time patient monitoring: through daily health checks, assessments, surveys, and integration with wearable devices, our platform facilitates continuous monitoring of patient health. Automated care delivery: we automate essential tasks, including appointment scheduling, sending reminders, and delivering educational content, to enhance patient engagement and reduce administrative tasks. Remote patient monitoring: healthcare providers can monitor vital signs, symptoms, and treatment plan adherence remotely, enabling timely interventions and proactive care management.
The SOLVEDGE Advantage: Our platform offers significant benefits to healthcare providers and patients alike. Improved clinical outcomes: by facilitating more effective care pathways and enabling early intervention, SOLVEDGE contributes to reduced readmission rates, fewer emergency department visits, and shorter hospital stays. Enhanced patient satisfaction: patients enjoy a higher quality of care with SOLVEDGE, benefiting from improved communication, comprehensive education, and continuous support. Cost savings: healthcare organizations can achieve substantial cost reductions by minimizing unnecessary readmissions, emergency visits, and complications associated with poor care management.
Applications and Impact: SOLVEDGE’s versatility allows for its application across various aspects of healthcare, with a particular emphasis on surgical care. From preparing patients for surgery to monitoring their post-operative recovery, our platform ensures a seamless and supportive care journey.
Beyond surgical care, our focus encompasses managing care pathways, enhancing patient engagement through patient-reported outcomes, providing advanced data analytics, integrating with electronic medical records (EMR), and streamlining billing processes. Our comprehensive approach addresses the myriad challenges faced by today’s healthcare industry, backed by our commitment to excellence in service, communication, and customer experience.
A Trusted Partner in Healthcare Innovation: Our strategic relationships and deep understanding of healthcare challenges have positioned us as an indispensable ally to healthcare providers nationwide. As we continue to develop innovative solutions, our goal remains unchanged: to simplify healthcare delivery, improve patient outcomes, and enhance the overall patient experience.
Job Category: Developer
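The JSONB and indexing skills the posting emphasizes reduce to patterns like the following sketch, shown with psycopg2. The connection settings, table, and sample payload are illustrative assumptions:

```python
# JSONB storage, a GIN index, and a containment query in PostgreSQL.
import psycopg2

conn = psycopg2.connect(dbname="appdb", user="app",
                        password="secret", host="localhost")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS observations (
        id      serial PRIMARY KEY,
        payload jsonb NOT NULL
    )
""")
# A GIN index lets containment (@>) queries use the index instead of
# scanning every row.
cur.execute("CREATE INDEX IF NOT EXISTS observations_payload_gin "
            "ON observations USING gin (payload)")

cur.execute("INSERT INTO observations (payload) VALUES (%s::jsonb)",
            ('{"patient": "p1", "vitals": {"hr": 72}}',))

# Containment matches rows whose payload includes the given fragment;
# -> / ->> navigate into the document.
cur.execute("SELECT payload->'vitals'->>'hr' FROM observations "
            "WHERE payload @> %s::jsonb",
            ('{"patient": "p1"}',))
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```

EXPLAIN ANALYZE on the SELECT is the usual next step when tuning: it shows whether the planner actually chose the GIN index.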
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
JD for a Databricks Data Engineer
Key Responsibilities: Design, develop, and maintain high-performance data pipelines using Databricks and Apache Spark. Implement medallion architecture (Bronze, Silver, Gold layers) for efficient data processing. Optimize Delta Lake tables, partitioning, Z-ordering, and performance tuning in Databricks. Develop ETL/ELT processes using PySpark, SQL, and Databricks Workflows. Manage Databricks clusters, jobs, and notebooks for batch and real-time data processing. Work with Azure Data Lake, AWS S3, or GCP Cloud Storage for data ingestion and storage. Implement CI/CD pipelines for Databricks jobs and notebooks using DevOps tools. Monitor and troubleshoot performance bottlenecks, cluster optimization, and cost management. Ensure data quality, governance, and security using Unity Catalog, ACLs, and encryption. Collaborate with data scientists, analysts, and business teams to deliver insights.
Required Skills & Experience: 5+ years of hands-on experience in Databricks, Apache Spark, and Delta Lake. Strong SQL, PySpark, and Python programming skills. Experience in Azure Data Factory (ADF), AWS Glue, or GCP Dataflow. Expertise in performance tuning, indexing, caching, and parallel processing. Hands-on experience with Lakehouse architecture and Databricks SQL. Strong understanding of data governance, lineage, and cataloging (e.g., Unity Catalog). Experience with CI/CD pipelines (Azure DevOps, GitHub Actions, or Jenkins). Familiarity with Airflow, Databricks Workflows, or other orchestration tools. Strong problem-solving skills with experience in troubleshooting Spark jobs.
Nice to Have: Hands-on experience with Kafka, Event Hubs, or real-time streaming in Databricks. Certifications in Databricks, Azure, AWS, or GCP.
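A compact sketch of the medallion flow (Bronze, Silver, Gold) the posting describes, in PySpark. It assumes a Databricks runtime where `spark` is predefined and Delta is available; the /mnt paths and column names are illustrative placeholders:

```python
# Medallion pipeline sketch: raw -> cleaned -> business aggregate, all Delta.
from pyspark.sql import functions as F

# Bronze: land raw files as-is, stamped with ingestion metadata.
bronze = (spark.read.json("/mnt/raw/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: deduplicate and apply basic quality rules.
silver = (spark.read.format("delta").load("/mnt/bronze/orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("amount") > 0))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: business-level aggregate, partitioned for downstream queries.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
(gold.write.format("delta").mode("overwrite")
     .partitionBy("order_date").save("/mnt/gold/daily_revenue"))
```

Partitioning and Z-ordering decisions would then be tuned per layer based on the dominant query patterns.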
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Job Summary: We are seeking an experienced and strategic SEO Manager / SEO Team Lead to lead our SEO initiatives and manage a team of SEO professionals. You will be responsible for planning, executing, and optimizing SEO strategies that increase organic visibility, traffic, and lead generation for our digital assets. This role requires strong leadership, analytical skills, and up-to-date SEO knowledge.
Key Responsibilities: Develop and implement comprehensive SEO strategies across multiple websites. Lead, mentor, and manage a team of SEO specialists, analysts, and content strategists. Conduct keyword research, competitor analysis, and market trend analysis to identify SEO opportunities. Oversee on-page optimization (meta tags, site architecture, content, internal linking, etc.). Manage off-page SEO initiatives including backlink strategies, outreach, and partnerships. Work closely with content, design, and development teams to align on SEO best practices. Monitor, analyze, and report on performance metrics (traffic, rankings, conversions, etc.) using tools like Google Analytics, Search Console, SEMrush, and Ahrefs. Perform technical SEO audits and work with developers to resolve technical issues (site speed, mobile-friendliness, indexing errors, etc.). Stay current with algorithm updates and emerging SEO trends. Manage SEO tools and platforms and recommend improvements or new tools.
Required Qualifications: Bachelor’s degree in Marketing, Communications, IT, or a related field. 4+ years of hands-on SEO experience, with at least 1–2 years in a leadership or managerial role. Proven track record of increasing organic traffic and improving search rankings. Strong understanding of search engine algorithms and ranking factors. Proficiency in SEO tools (Google Analytics, Search Console, Ahrefs, SEMrush, Screaming Frog, etc.). Familiarity with HTML/CSS, CMS systems (e.g., WordPress), and basic web development concepts. Excellent analytical, communication, and project management skills.
Additional Preferred Qualifications: Experience in international SEO or e-commerce SEO. Google certifications (Analytics, Ads) are a plus. Experience with A/B testing and CRO is an advantage.
Soft Skills: Strategic thinker with strong decision-making capabilities. Team player with excellent leadership and interpersonal skills. Ability to manage multiple projects and deadlines in a fast-paced environment.
Job Type: Full-time. Pay: ₹35,000.00 - ₹85,000.00 per month. Schedule: Day shift. Experience: SEO: 3 years (required); team management: 3 years (required). Location: Navi Mumbai, Maharashtra (required). Work Location: In person.
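Technical SEO audits of the kind listed above often start with simple programmatic checks. A tiny on-page sketch using the requests and beautifulsoup4 packages; the URL is a placeholder, and full audits would use a crawler such as Screaming Frog across the whole site:

```python
# Fetch one page and report the on-page signals an audit checks first:
# title, meta description, and robots directives.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

title = soup.title.string.strip() if soup.title and soup.title.string else None
desc = soup.find("meta", attrs={"name": "description"})
robots = soup.find("meta", attrs={"name": "robots"})

print("status:", resp.status_code)
print("title:", title, "(length %d)" % (len(title) if title else 0))
print("description:", desc.get("content", "") if desc else "MISSING")
print("robots:", robots.get("content", "") if robots else "not set (indexable by default)")
```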
Posted 2 weeks ago
4.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration, and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.
The Team: Deloitte’s practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning. Learn more about the Analytics and Information Management practice.
Work you will do. Location: Gurugram, India. Experience Required: 4-7 years. Notice Period: 30 days max.
We are seeking a skilled Data Modeler with 4-7 years of experience to design, develop, and maintain data models that drive business insights and operational efficiency. The ideal candidate will have hands-on experience in data modelling tools and techniques, SQL, database design, and data warehousing concepts. This role requires collaboration with stakeholders to ensure data integrity, performance optimisation, and seamless integration into business processes.
Key Responsibilities:
- Design and maintain conceptual, logical, and physical data models aligned with business needs.
- Develop and optimize database structures with a focus on performance and scalability.
- Implement data modelling best practices, including normalization and indexing.
- Work closely with database administrators, developers, and business analysts.
- Ensure compliance with data governance policies and security standards.
- Provide insights into ETL processes and data warehousing strategies.
- Support SQL query development, optimization, and troubleshooting.
Required Skills & Qualifications:
- Proven experience in data modelling for relational and non-relational databases.
- Hands-on expertise in SQL, database design, and optimisation techniques.
- Strong understanding of data warehousing concepts and ETL methodologies.
- Familiarity with data governance, security, and compliance frameworks.
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration abilities to work with diverse teams.
Preferred Qualifications:
- Experience in cloud-based data platforms (AWS, Azure, GCP).
- Knowledge of Big Data frameworks (Hadoop, Spark, Snowflake).
- Certification in data modelling, database management, or cloud platforms.
Our purpose: Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work, always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.
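As a toy illustration of the normalization-plus-indexing trade-offs a data modeller weighs, here is a sketch using the standard-library sqlite3 module so it runs anywhere; the two-table schema is a generic placeholder, not a Deloitte artifact:

```python
# 3NF schema with an indexed foreign key, then a join that benefits from it.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
    -- Customer attributes live once; orders reference them by key (3NF).
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        city        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        amount      REAL NOT NULL CHECK (amount >= 0),
        order_date  TEXT NOT NULL
    );
    -- Index the foreign key: joins and per-customer lookups dominate reads.
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

cur.execute("INSERT INTO customer VALUES (1, 'Asha', 'Pune')")
cur.execute("INSERT INTO orders VALUES (10, 1, 499.0, '2024-05-01')")
cur.execute("""
    SELECT c.name, SUM(o.amount)
    FROM orders o JOIN customer c USING (customer_id)
    GROUP BY c.name
""")
print(cur.fetchall())
```

A warehouse-facing model would typically denormalize this into a star schema instead; the modelling call depends on whether reads are transactional or analytical.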
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Kolkata, West Bengal
On-site
We are looking for a highly skilled Backend Developer with extensive experience in MongoDB and ExpressJS. The ideal candidate should have a deep understanding of MongoDB, including writing raw queries, optimizing performance, and managing large-scale databases. Additionally, they should have a strong grasp of ExpressJS for building scalable and efficient backend applications.
Key Responsibilities: Design, develop, and maintain backend services using Node.js and ExpressJS. Write raw MongoDB queries for complex data retrieval and manipulation. Optimize database performance, indexing, and query execution. Develop and integrate RESTful APIs with frontend applications. Implement authentication and authorization mechanisms. Collaborate with frontend developers and other stakeholders to ensure seamless integration. Ensure high availability, scalability, and security of backend systems. Perform code reviews, debugging, and troubleshooting to maintain code quality. Stay updated with the latest trends and best practices in backend development.
Required Skills: MongoDB: deep expertise in database design, indexing, aggregation, and raw queries. ExpressJS: strong understanding of middleware, routing, and API development. Node.js: proficiency in asynchronous programming and event-driven architecture. RESTful APIs: experience in designing and implementing scalable APIs. Authentication & security: knowledge of JWT, OAuth, and other security protocols. Version control: experience with Git and collaborative development workflows. Performance optimization: ability to optimize queries and backend processes for efficiency.
Preferred Skills: Experience with Docker, Kubernetes, and cloud platforms like AWS/GCP. Familiarity with GraphQL and microservices architecture. Knowledge of CI/CD pipelines for automated deployments.
Job Types: Full-time, Permanent. Pay: ₹840,000.00 - ₹1,300,000.00 per year. Benefits: health insurance, paid sick time, paid time off, Provident Fund. Location type: in-person. Schedule: day shift, Monday to Friday, weekend availability. Ability to commute/relocate: Kolkata, West Bengal: reliably commute or planning to relocate before starting work (required). Education: Bachelor's (preferred). Experience: back-end development: 5 years (required); MongoDB: 4 years (required); Express.js: 4 years (required); Node.js: 3 years (required); APIs: 3 years (required). Location: Kolkata, West Bengal (preferred). Work Location: In person.
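The raw-query and indexing work described above looks like the following sketch. It is shown through PyMongo for consistency with the other snippets in this document; the same pipeline runs essentially unchanged in the Node.js driver this role would actually use, and the connection string and schema are placeholders:

```python
# Compound index + raw aggregation pipeline against a local MongoDB.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Compound index supporting the match below without a collection scan.
orders.create_index([("status", ASCENDING), ("created_at", ASCENDING)])

orders.insert_many([
    {"status": "paid", "amount": 120, "created_at": 1},
    {"status": "paid", "amount": 80, "created_at": 2},
    {"status": "open", "amount": 50, "created_at": 3},
])

# Raw aggregation: filter first (index-assisted), then group into totals.
pipeline = [
    {"$match": {"status": "paid"}},
    {"$group": {"_id": "$status", "total": {"$sum": "$amount"}}},
]
print(list(orders.aggregate(pipeline)))
```

Running the $match stage with explain() is the standard way to confirm the index is actually used.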
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Junior Software Developer
8848 Digital LLP provides clients with high-quality ERPNext implementations, application customizations, management consulting, and a variety of technical infrastructure services. 8848 Digital offers enterprise resource planning (ERP) solutions to mid-market companies around the world, including custom solutions on web and mobile platforms. We are looking for a passionate and detail-oriented Junior Software Developer to join our team. The ideal candidate will have a solid foundation in SQL along with working knowledge of a programming language such as Python. This role involves working on database design, backend development, and ERPNext customizations to support robust, scalable applications.
Duties and Responsibilities: Develop and optimize complex SQL queries, stored procedures, and database schemas. Design and manage database objects, ensuring data integrity and efficiency. Write and maintain backend scripts and logic using Python or a similar language. Implement ETL processes for data migration and transformation. Support ERPNext customizations including scripting, reporting, and module-level configurations. Monitor, troubleshoot, and optimize database and backend performance. Collaborate with internal stakeholders and clients to deliver technical solutions and reports. Maintain technical documentation for all development tasks. Conduct data research and build relevant reporting deliverables.
Qualification and Requirements: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Information Technology, or a related field. Months to 1 year of experience in SQL development and backend programming. Strong foundation in SQL and relational databases (MySQL, PostgreSQL, SQL Server, or Oracle). Working knowledge of at least one backend programming language such as Python (preferred), Node.js, or PHP. Understanding of query optimization, indexing, and stored procedures. Familiarity with ERP systems such as ERPNext or Odoo is a plus. Experience with basic ETL processes and database security best practices. Good understanding of API integration, backend logic, and data handling. Certifications in SQL, Python, or backend development (preferred but not mandatory). Strong analytical skills, attention to detail, and ability to work in collaborative environments. Excellent verbal and written communication skills.
Job Types: Full-time, Permanent. Benefits: health insurance, Provident Fund. Schedule: day shift, Monday to Friday, morning shift. Experience: SQL: 1 year (required); Python: 1 year (required). Location: Mumbai, Maharashtra (required). Work Location: In person.
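The basic ETL process this role mentions can be sketched end to end with only the standard library; the file contents, table, and deduplication rule below are illustrative:

```python
# Minimal extract-transform-load pass: parse CSV, normalize, load with dedupe.
import csv
import io
import sqlite3

# Extract: a CSV source, stubbed in memory here (note the duplicate row).
raw_csv = io.StringIO("region,amount\nwest,100\nwest,100\neast,55\n")

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount REAL, UNIQUE(region, amount))")

# Transform: normalize casing and cast amounts to numbers.
rows = [(r["region"].strip().upper(), float(r["amount"]))
        for r in csv.DictReader(raw_csv)]

# Load: INSERT OR IGNORE drops exact duplicates via the UNIQUE constraint.
cur.executemany("INSERT OR IGNORE INTO sales VALUES (?, ?)", rows)
conn.commit()
print(cur.execute("SELECT * FROM sales").fetchall())
```

The same extract-transform-load shape carries over to MySQL or PostgreSQL with a driver swap and real file paths.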
Posted 2 weeks ago
The indexing job market in India has been growing steadily over the past few years, with increasing demand for professionals skilled in data organization and management. Indexing roles are crucial in industries such as publishing, research, and information technology. In this article, we will explore the job opportunities, salary ranges, career paths, and related skills for indexing roles in India.
India's major metro hubs are known for their vibrant job markets and have a high demand for indexing professionals.
The average salary range for indexing professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.
A typical career progression in indexing roles may include:
- Indexing Associate
- Indexing Specialist
- Senior Indexing Analyst
- Indexing Manager
With experience and additional certifications, professionals can move up the ladder to managerial roles within the indexing field.
In addition to indexing skills, professionals in this field are often expected to have knowledge of:
- Data management
- Information retrieval systems
- Database management
- Advanced Excel skills
- Attention to detail
As you prepare for indexing roles in India, remember to showcase your expertise in data organization and management. Stay updated on industry trends and technologies to stand out in the competitive job market. With the right skills and preparation, you can confidently apply for indexing roles and advance your career in this field. Good luck!