
152245 Python Jobs - Page 25

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Senior Software Engineer – TBO (www.tbo.com)

TBO is a global platform that aims to simplify all buying and selling travel needs of travel partners across the world. The proprietary technology platform aims to simplify the demands of the complex world of global travel by seamlessly connecting highly distributed travel buyers and travel suppliers at scale. The TBO journey began in 2006 with a simple goal – to address the evolving needs of travel buyers and suppliers. What started off as a single-product air ticketing company has today become the leading B2A (Business to Agents) travel portal across the Americas, UK & Europe, Africa, the Middle East, India, and Asia Pacific. Today, TBO’s product range spans air, hotels, rail, holiday packages, car rentals, transfers, sightseeing, cruise, and cargo. Apart from these products, our proprietary platform relies heavily on AI/ML to offer unique listings and products that meet specific requirements put forth by customers, thus increasing conversions. TBO’s approach has always been technology-first, and we continue to invest in new innovations and new offerings to make travel easy and simple. TBO’s travel APIs serve large travel ecosystems across the world, while the modular architecture of the platform enables new travel products while expanding across new geographies.

Why TBO: You will influence and contribute to building the world’s largest technology-led travel distribution network for a $9 trillion global travel business market. We are the emerging leaders in technology-led, end-to-end travel management in the B2B space, with a physical presence in 47 countries and business in 110 countries. We are reputed for our long-lasting, trusted relationships: we stand by our ecosystem of suppliers and buyers to service the end customer, in an open and informal start-up environment which cares.

What TBO offers to a Life Traveller in You: Enhance your leadership acumen. Join the journey to create global scale and ‘World’s Best’.
Challenge yourself to do something path-breaking. Be empowered – the only thing to stop you will be your imagination. Post-pandemic, the travel space is likely to see significant growth; witness and shape this space – it will be one exciting journey. As the fastest-growing B2B platform, our priority is purpose-building scalable systems, adopting industry-leading technologies to support best-in-class business capabilities for high-performing and scalable solutions, and responding quickly to the evolving regulatory environment to help meet the firm’s internal and external regulatory commitments.

Top Sights During Your Role Stay (Key Expectations): Strong programming experience with a good grasp of scalability and performance. 3+ years of experience building consumer-facing platforms. Strong grasp of distributed systems and system design. High degree of ownership. Ability to pick up and learn new programming languages and frameworks. Strong problem-solving ability and focus on innovation. Prior experience with serverless technologies is desirable. Exposure to languages/frameworks like GoLang, Rust, Java, Python, and Node. Capacity to use SQL Server with ease; exposure to Redis, MongoDB, or PostgreSQL is desired. Should have worked with cloud platforms like AWS/Azure. A flair for creating well-presented software that is technically sound. Outstanding analytical, problem-solving, and communication skills. Excellent organizational and time management skills. Self-driven, flexible, and innovative.
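Since the posting pairs scalability expectations with Redis exposure, here is a minimal in-process sketch of the TTL-cache pattern a Redis layer typically provides. The class, the injectable clock, and the fare key are illustrative assumptions for the sketch, not TBO code.

```python
import time

class TTLCache:
    """Tiny in-process stand-in for Redis-style caching with expiry."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock        # injectable clock makes expiry testable
        self._store = {}          # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:   # lazy eviction on read
            del self._store[key]
            return default
        return value
```

In Redis itself the same behaviour is a `SET key value EX ttl` followed by `GET`; the point of the sketch is only the expire-on-read contract a caching layer must honour.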

Posted 14 hours ago

Apply

5.0 - 12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Role : Marketing Analytics Lead / Senior Marketing Analyst – US Banking Roles and Responsibilities: Lead and execute campaign analytics across multiple channels including Email, Direct Mail (DM), and Digital platforms to optimize marketing effectiveness. Conduct borrower, customer, and marketing strategy analytics within the BFSI (Banking, Financial Services, and Insurance) sector, focusing on data-driven decision making. Develop and maintain dashboards and reports using Tableau and SQL to deliver actionable insights to stakeholders. Serve as a client-facing lead, communicating complex analytical findings clearly and effectively to internal teams and external clients. Independently manage end-to-end analytics projects, leading teams of 4-7 analysts (applicable for Team Lead level roles). Adapt and thrive in a fast-paced, continuously evolving environment with shifting priorities. Candidate Profile: Master’s or Bachelor’s degree in Mathematics, Statistics, Economics, Computer Science, Engineering, Analytics, or related fields from top-tier universities. 5-12 years of experience in consulting or analytics delivery, with strong preference for backgrounds in finance, payments, or banking domains. Exceptional analytical skills with proven ability to research, analyze, and solve both routine and complex customer and business problems. Proficient hands-on experience in SQL for data extraction and manipulation is mandatory. Strong working knowledge of Tableau or other data visualization tools is preferred. Familiarity with Python for data analysis and automation is a plus. Excellent communication skills with experience in client-facing roles, capable of presenting insights to diverse audiences. Demonstrated leadership skills with the ability to mentor and guide a team (for leadership positions).
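The mandatory SQL skill in this role boils down to extractions like the one below: a channel-level response-rate pull that could feed a Tableau dashboard. The table and column names are hypothetical, and SQLite in-memory is used purely for illustration.

```python
import sqlite3

# Hypothetical campaign-response table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE campaign_touch (
    campaign TEXT, channel TEXT, responded INTEGER
);
INSERT INTO campaign_touch VALUES
    ('Q1-PreApproval', 'Email', 1), ('Q1-PreApproval', 'Email', 0),
    ('Q1-PreApproval', 'DM',    1), ('Q1-PreApproval', 'DM',    0),
    ('Q1-PreApproval', 'DM',    0), ('Q2-BalanceXfer', 'Digital', 1);
""")

# Response rate by channel -- the kind of aggregate behind a campaign dashboard.
rows = conn.execute("""
    SELECT channel,
           COUNT(*)       AS touches,
           AVG(responded) AS response_rate
    FROM campaign_touch
    GROUP BY channel
    ORDER BY channel
""").fetchall()
```

`AVG` over a 0/1 response flag is a compact way to get a rate per group without a separate `SUM(...)/COUNT(...)` expression.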

Posted 14 hours ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We’re Hiring – Advisor / Senior Advisor (Actuarial) | Mumbai Join a global leader in reinsurance brokerage and risk management, and be part of our actuarial team delivering high-quality insights to clients worldwide. Location: Mumbai Shift: 2:30 PM – 10:30 PM IST About the Role: As an Advisor / Senior Advisor – Actuarial, you will analyze data, build loss models, price reinsurance treaties, and support clients in optimizing their reinsurance strategies. You’ll collaborate with cross-functional teams, mentor colleagues, and contribute to innovative actuarial solutions for clients in global markets including the US, London, and Lloyd’s Syndicates. Key Responsibilities: Analyze data and trends to recommend optimal reinsurance strategies. Build comprehensive loss models and estimate treaty pricing. Perform reinsurance structure optimization and profitability analysis. Collaborate with cross-functional teams on product development and pricing strategies. Generate reports and presentations for decision-making. Support industry studies, benchmarking, and process innovation. Skills & Competencies: Strong analytical, problem-solving, and communication skills. Ability to manage multiple projects and meet deadlines. Proficiency in MS Excel; knowledge of VBA, SQL, Python, R, Power BI is a plus. Qualifications: Graduate/Postgraduate in Statistics, Mathematics, Economics, or Commerce. Member of a recognized actuarial institute (IFoA, IAI, etc.) with 3–6 actuarial exams cleared. 1–3 years of relevant actuarial experience. If you are detail-oriented, passionate about actuarial science, and eager to make an impact in the reinsurance industry, we’d love to hear from you.
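The loss-modelling and treaty-pricing duties above can be illustrated with a toy excess-of-loss calculation. The function names and the equally weighted scenario set are assumptions for the sketch, not the firm's actual pricing method (which would add loadings, trends, and a full loss distribution).

```python
def loss_to_layer(gross_loss, attachment, limit):
    """Ceded loss to an excess-of-loss layer quoted as 'limit xs attachment'."""
    return min(max(gross_loss - attachment, 0.0), limit)

def expected_layer_loss(scenarios, attachment, limit):
    """Pure expected loss (no loadings) over equally weighted loss scenarios."""
    ceded = [loss_to_layer(x, attachment, limit) for x in scenarios]
    return sum(ceded) / len(ceded)
```

For a 100 xs 100 layer, a 50 loss cedes nothing, a 150 loss cedes 50, and a 400 loss is capped at the 100 limit; the expected layer loss is just the mean of those ceded amounts.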

Posted 14 hours ago

Apply

0.0 - 3.0 years

0 - 0 Lacs

Kochi, Kerala

On-site

Job Title: Web Application Developer (2–3 Years Experience) Location: Kochi, Kerala Employment Type: Full-Time About the Role We are looking for a skilled and detail-oriented Web Application Developer with 2–3 years of professional experience (excluding internships) to join our growing team. The ideal candidate will be responsible for designing, developing, and maintaining high-quality web applications that deliver exceptional user experiences and meet business objectives. Key Responsibilities Design, develop, and maintain responsive, scalable, and secure web applications. Write clean, maintainable, and well-documented code following industry best practices. Integrate APIs, third-party services, and databases into applications. Troubleshoot, debug, and optimize applications for performance and security. Conduct code reviews and provide constructive feedback to peers. Stay up-to-date with emerging technologies, frameworks, and industry trends. Required Skills & Qualifications Education: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience). Experience: 2–3 years of professional web application development experience (excluding internship period). Proficiency in HTML5, CSS3, JavaScript (ES6+) and at least one modern frontend framework (React, Angular, or Vue.js). Strong backend development skills using Node.js, PHP, Python, or Java . Experience working with relational and/or NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB). Familiarity with RESTful API development and integration. Understanding of version control systems (Git, GitHub/GitLab). Strong problem-solving and debugging skills. Knowledge of responsive design principles and cross-browser compatibility. Preferred Skills (Nice to Have) Experience with cloud platforms (AWS, Azure, or Google Cloud). Knowledge of CI/CD pipelines and DevOps practices. Exposure to testing frameworks (Jest, Mocha, PHPUnit, etc.). 
Familiarity with security best practices in web application development. Soft Skills Strong communication and teamwork skills. Ability to manage time effectively and meet deadlines. Attention to detail with a focus on delivering high-quality work. Job Type: Full-time Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Paid sick time Work Location: In person
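The RESTful API development this role asks for can be sketched as a minimal WSGI JSON endpoint, framework-free so it stays self-contained; the `/health` route and payload are invented for illustration.

```python
import json

def app(environ, start_response):
    """Tiny WSGI application: a sketch of a JSON health endpoint."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(body)))])
        return [body]
    # Anything else gets a JSON 404 rather than an HTML error page.
    body = b'{"error": "not found"}'
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]
```

In practice the same handler shape maps directly onto Flask or Django views; WSGI is just the lowest common denominator those frameworks build on, which also makes the handler trivially testable with a fake `environ`.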

Posted 14 hours ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: Senior AI/ML Solutions Architect Country: India Total Experience: 8+ years Relevant Experience: Minimum 6+ years Domain: Capital Markets Job Description We are seeking an experienced Senior AI/ML Solutions Architect with deep expertise in AWS AI/ML services, cloud-native architectures, and enterprise-scale AI solution development. The ideal candidate will have strong skills in designing and implementing RAG-based architectures, LLM integration, and multi-cloud GenAI solutions with a focus on scalability, security, and cost-effectiveness. Roles and Responsibilities Architect, design, and develop enterprise-scale AI/ML solutions across cloud platforms. Utilize AWS services such as Bedrock, SageMaker, Lambda, API Gateway, and EKS to deliver advanced AI capabilities. Implement RAG-based architectures, LLM integrations, and vector databases for GenAI solutions. Write high-quality code using Python, and manage cloud deployments with Terraform, Harness, and IaC best practices. Ensure cloud security using IAM policies, security frameworks, and compliance standards in hybrid and multi-cloud setups. Integrate AWS Bedrock with Azure OpenAI and hybrid AI gateway solutions. Monitor and optimize AI solutions using observability tools like CloudWatch, Grafana, or Datadog. Work in Agile/DevOps environments with CI/CD pipelines for rapid and reliable delivery. Collaborate with onsite teams across geographies to ensure alignment and timely delivery.
Mandatory Skills: AWS, AWS Bedrock, AI/ML frameworks, SageMaker, Lambda, API Gateway, EKS, Python, Terraform, Harness, IaC (Infrastructure as Code). Desired Skills: CloudWatch, Grafana, Datadog; Azure OpenAI, Hybrid AI Gateway Solutions; DevOps practices. Additional Details: Work Location: Pune Phase 2 / Hyderabad SEZ / Chandigarh SEZ (No WFH). Mode of Interview: Microsoft Teams – Video. Work Mode: WFO – 5 days per week. Certification: AWS Certified Solutions Architect – Professional (Highly Preferred). Shift: Regular Day Shift. Business Travel: No. BGV: Required before onboarding (FADV). Skills: AWS, Bedrock, ML, AI, Python, Terraform, Lambda, SageMaker, IaC, Grafana
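The RAG-based architecture this posting keeps returning to reduces to retrieve-then-prompt. The toy sketch below swaps the real vector database and Bedrock/SageMaker embeddings for a bag-of-words cosine similarity, so every name and document here is an illustrative assumption.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents, k=2):
    """Ground the LLM call in retrieved context -- the 'A' in RAG."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In the architecture the posting describes, `embed` would be a Bedrock or SageMaker endpoint and `retrieve` a vector-database query; the control flow, however, stays exactly this shape.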

Posted 14 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Gruve Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks. Position Summary We are seeking a Staff Engineer – DevOps with 3-5 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton. Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads. Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation. Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog. Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning. Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability. Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation. Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags. Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
Basic Qualifications A bachelor’s or master’s degree in computer science, electronics engineering, or a related field. 3-5 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation. Strong expertise in CI/CD pipelines, version control (Git), and release automation. Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration. Proficiency in Terraform and Ansible for infrastructure automation. Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.). Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog. Strong scripting and automation skills in Python, Bash, or Go. Preferred Qualifications Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling. Exposure to serverless architectures and event-driven workflows. Contributions to open-source DevOps projects. Why Gruve At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you. Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
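One of the zero-downtime strategies the responsibilities list, canary releases, hinges on a deterministic traffic split so a given user always lands on the same side. The hash-the-user-ID approach below is one common choice, sketched here as an assumption rather than Gruve's actual mechanism.

```python
import hashlib

def canary_route(user_id: str, canary_percent: int) -> str:
    """Deterministically assign a user to 'canary' or 'stable'.

    Hashing (rather than random choice) keeps each user's experience
    consistent across requests while honouring the rollout percentage.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Raising `canary_percent` from 1 to 100 then becomes the whole rollout: no redeploys, just a config change, which is also how feature flags generalize the same idea.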

Posted 14 hours ago

Apply

4.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Position: WebLogic Administrator (2–4 Years Experience) Location: Coimbatore (On-site; Face-to-Face Interview Required) Reporting: Day 1 Onboarding (Post Offer) RTH: Yes Total Experience: 2–4 years Relevant Experience: Minimum 1 year Mandatory Skills: WebLogic Administration; Oracle Cloud Infrastructure (OCI). Good to Have: Unix/Linux Administration; Python; WLST (WebLogic Scripting Tool); Shell Scripting. Detailed Job Description: Manage and monitor WebLogic Server environments to ensure high availability and optimal performance. Administer, configure, and maintain WebLogic Server and OCI environments. Perform system and application deployments, updates, and regular maintenance. Troubleshoot and resolve system/application issues promptly. Ensure security measures are implemented and maintained across WebLogic environments. Collaborate with development and operations teams for end-to-end application lifecycle support. Maintain detailed documentation for configuration changes, system processes, and procedures.

Posted 14 hours ago

Apply

2.0 - 4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Microsoft Management Level Senior Associate Job Description & Summary At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In technology delivery at PwC, you will focus on implementing and delivering innovative technology solutions to clients, enabling seamless integration and efficient project execution. You will manage the end-to-end delivery process and collaborate with cross-functional teams to drive successful technology implementations. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities: Deploy and maintain critical applications on cloud-native microservices architecture. Implement automation, effective monitoring, and infrastructure-as-code. Deploy and maintain CI/CD pipelines across multiple environments. Design and implement secure automation solutions for development, testing, and production environments. Build and deploy automation, monitoring, and analysis solutions. Manage the continuous integration and delivery pipeline to maximize efficiency. Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring. Mandatory Skill Sets: Automating repetitive tasks using scripting (e.g., Bash, Python, PowerShell, YAML, etc.). Practical experience with Docker containerization and clustering (Kubernetes/ECS/AKS). Expertise with the Azure Cloud Platform (e.g., ARM, App Service and Functions, Autoscaling, Load Balancing, etc.). Version control system experience (e.g., Git). Experience implementing CI/CD (e.g., Azure DevOps, Jenkins, etc.). Experience with configuration management tools (e.g., Ansible, Chef). Experience with infrastructure-as-code (e.g., Terraform, CloudFormation). Preferred Skill Sets: Good communication skills. Ability to secure, scale, and manage Linux virtual environments and on-prem Windows Server environments. Certifications/Credentials: Certification in Windows Admin/Azure DevOps/Kubernetes. Years of experience required: 2–4 years. Education qualification: Bachelor's degree in Computer Science, IT, or a related field. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor Degree Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills: Azure DevOps, Linux Bash, Microsoft PowerShell, Python (Programming Language) Optional Skills: Linux Desired Languages (If blank, desired languages not specified) Travel Requirements: Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
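The first mandatory skill, automating repetitive tasks with scripting, often amounts to rendering per-environment configuration from a single template. A small standard-library sketch follows; the config shape is invented for illustration and is not a real Azure DevOps schema.

```python
from string import Template

# Hypothetical deployment config template; field names are illustrative only.
TEMPLATE = Template("""\
environment: $env
replicas: $replicas
image: registry.example.com/app:$tag
""")

def render_configs(tag, environments):
    """Render one config per environment instead of hand-editing each file."""
    return {env: TEMPLATE.substitute(env=env, replicas=replicas, tag=tag)
            for env, replicas in environments.items()}
```

The same pattern scales up to Jinja2-templated Ansible vars or Terraform `tfvars` files; the win is that a release becomes one `tag` change rather than N manual edits.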

Posted 14 hours ago

Apply

0 years

0 Lacs

Siliguri, West Bengal, India

On-site

Job Title: Automation Engineer Job Summary: We are seeking a skilled and detail-oriented Automation Engineer to design, develop, and implement automated systems and processes. The ideal candidate will have experience in programming, systems integration, and process optimization, with the ability to improve efficiency, quality, and safety through automation. --- Key Responsibilities: Design, program, and implement automated systems for manufacturing, production, or software environments. Develop PLC (Programmable Logic Controller) programs and HMI (Human-Machine Interface) systems. Collaborate with cross-functional teams to identify opportunities for automation. Troubleshoot and resolve issues in automated processes and systems. Conduct system tests and validation procedures. Maintain documentation for control systems, software, and hardware configurations. Ensure compliance with safety and industry standards in all automation solutions. Continuously analyze processes to identify improvements and cost-saving opportunities. Provide training and support to operators and maintenance personnel. Integrate and interface automation systems with enterprise systems (e.g., SCADA, MES, ERP). --- Required Qualifications: Bachelor’s degree in Electrical Engineering, Mechanical Engineering, Computer Science, or a related field. Proven experience in automation engineering, control systems, or related fields. Proficiency in PLC programming (e.g., Siemens, Allen-Bradley, Mitsubishi). Experience with HMI/SCADA software (e.g., Wonderware, Ignition, WinCC). Strong understanding of industrial networks and communication protocols (e.g., Modbus, Profibus, OPC). Knowledge of robotics, sensors, actuators, and motion control systems. Excellent problem-solving and analytical skills. Strong verbal and written communication skills. --- Preferred Qualifications: Experience with Python, C#, or other scripting languages. Familiarity with Industry 4.0 and IIoT technologies. 
Certifications in automation platforms (e.g., Siemens, Rockwell). Knowledge of safety standards (e.g., ISO, ANSI, NFPA 70E).
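Talking to PLCs over the serial protocols listed above (Modbus in particular) requires the RTU frame checksum. The sketch below is the standard CRC-16/MODBUS algorithm; the example frame bytes are arbitrary illustration, not taken from any device.

```python
def modbus_crc16(frame: bytes) -> int:
    """CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def append_crc(frame: bytes) -> bytes:
    """Build the on-wire RTU frame: payload followed by CRC, low byte first."""
    crc = modbus_crc16(frame)
    return frame + bytes([crc & 0xFF, crc >> 8])
```

A handy validity check falls out of the append order: recomputing the CRC over the whole received frame (payload plus checksum) yields zero when the frame is intact.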

Posted 14 hours ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

On-site

We are looking for a skilled LLM / GenAI & Machine Learning Expert who can drive innovative AI/ML solutions and spearhead the development of advanced GenAI-powered applications. The ideal candidate will be a strong Python programmer with hands-on experience in both traditional Machine Learning and Generative AI, including experience with AWS SageMaker, LLMs, and related tools and frameworks. This role demands not only technical excellence but also strong communication skills and the ability to guide and mentor junior team members. Exposure to emerging concepts like multi-agent collaboration, Memory-Context-Planning (MCP), and agent-to-agent workflows is highly desirable. Key Responsibilities: ● Lead the design and development of GenAI/LLM and ML-based products and solutions. ● Work with AWS SageMaker for model training, deployment, and scaling of ML/GenAI workflows. ● Mentor and support junior engineers in implementing GenAI and ML solutions. ● Perform prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) implementations. ● Build custom LLM workflows using Python and frameworks such as LangChain, LlamaIndex, and Hugging Face Transformers. ● Apply advanced paradigms such as Memory-Context-Planning, agent-to-agent collaboration, and autonomous agent design. ● Research and prototype new ideas and stay current with evolving GenAI/ML trends. ● Collaborate cross-functionally with product, design, and engineering teams to align AI/LLM use cases with business goals. Required Skills ● 5+ years of total software development experience, with a strong foundation in Python. ● 2–3+ years of hands-on experience in GenAI and Machine Learning, including real-world implementation and deployment. ● At least 1 year of hands-on experience using AWS SageMaker for ML/LLM model development and deployment. ● Deep familiarity with LLMs such as GPT-4, Claude, Mistral, and LLaMA. 
● Strong understanding of prompt engineering, fine-tuning, tokenization, and embedding-based search. ● Experience with tools and frameworks like Hugging Face, LangChain, OpenAI API, and vector databases (e.g., Pinecone, FAISS, Chroma). ● Exposure to agent frameworks and multi-agent orchestration tools. ● Excellent written and verbal communication skills. ● Proven ability to lead and mentor team members in both GenAI and ML domains. Perks & Benefits ● Competitive Compensation and Benefits ● Half Yearly Appraisals ● Friendly Environment ● Work-life Balance ● 5 days working ● Flexible office timings ● Employee-friendly leave policies
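Prompt engineering, the first required skill above, frequently starts with assembling few-shot prompts. The format below is a generic illustration, not tied to GPT-4, Claude, or any other model named in the posting.

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: task instruction, worked examples, then the query.

    The model is left to complete the final 'Output:' line, mirroring the
    pattern established by the examples.
    """
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts += [f"Input: {example_input}", f"Output: {example_output}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)
```

Keeping prompt assembly in a plain function like this (rather than string-concatenating inline) is also what makes it unit-testable, which matters once prompts become part of a deployed workflow.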

Posted 14 hours ago

Apply

10.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Company Description Wiser Solutions is a suite of in-store and eCommerce intelligence and execution tools. We're on a mission to enable brands, retailers, and retail channel partners to gather intelligence and automate actions to optimize in-store and online pricing, marketing, and operations initiatives. Our Commerce Execution Suite is available globally. Job Description When looking to buy a product, whether it is in a brick and mortar store or online, it can be hard enough to find one that not only has the characteristics you are looking for but is also at a price that you are willing to pay. It can also be especially frustrating when you finally find one, but it is out of stock. Likewise, brands and retailers can have a difficult time getting the visibility they need to ensure you have the most seamless experience as possible in selecting their product. We at Wiser believe that shoppers should have this seamless experience, and we want to do that by providing the brands and retailers the visibility they need to make that belief a reality. Our goal is to solve a messy problem elegantly and cost effectively. Our job is to collect, categorize, and analyze lots of structured and semi-structured data from lots of different places every day (whether it’s 20 million+ products from 500+ websites or data collected from over 300,000 brick and mortar stores across the country). We help our customers be more competitive by discovering interesting patterns in this data they can use to their advantage, while being uniquely positioned to be able to do this across both online and instore. We are looking for a lead-level software engineer to lead the charge on a team of like-minded individuals responsible for developing the data architecture that powers our data collection process and analytics platform. If you have a passion for optimization, scaling, and integration challenges, this may be the role for you. 
What You Will Do Think like our customers – you will work with product and engineering leaders to define data solutions that support customers’ business practices. Design/develop/extend our data pipeline services and architecture to implement your solutions – you will be collaborating on some of the most important and complex parts of our system that form the foundation for the business value our organization provides. Foster team growth – provide mentorship to junior team members and evangelize expertise to others. Improve the quality of our solutions – help to build enduring trust within our organization and amongst our customers by ensuring high quality standards of the data we manage. Own your work – you will take responsibility to shepherd your projects from idea through delivery into production. Bring new ideas to the table – some of our best innovations originate within the team. Technologies We Use Languages: SQL, Python. Infrastructure: AWS, Docker, Kubernetes, Apache Airflow, Apache Spark, Apache Kafka, Terraform. Databases: Snowflake, Trino/Starburst, Redshift, MongoDB, Postgres, MySQL. Others: Tableau (as a business intelligence solution). Qualifications Bachelor’s/Master’s degree in Computer Science or relevant technical degree. 10+ years of professional software engineering experience. Strong proficiency with data languages such as Python and SQL. Strong proficiency working with data processing technologies such as Spark, Flink, and Airflow. Strong proficiency working with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Snowflake, etc.). Solid understanding of streaming solutions such as Kafka, Pulsar, Kinesis/Firehose, etc. Hands-on experience with Docker, Kubernetes, infrastructure as code using Terraform, and Kubernetes package management with Helm charts. Solid understanding of ETL/ELT and OLTP/OLAP concepts. Solid understanding of columnar/row-oriented data structures (e.g., Parquet, ORC, Avro, etc.).
Solid understanding of Apache Iceberg or other open table formats. Proven ability to transform raw unstructured/semi-structured data into structured data in accordance with business requirements. Solid understanding of AWS, Linux, and infrastructure concepts. Proven ability to diagnose and address data abnormalities in systems. Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs. Experience building data warehouses using conformed dimensional models. Experience building data lakes and/or leveraging data lake solutions (e.g., Trino, Dremio, Druid, etc.). Experience working with business intelligence solutions (e.g., Tableau, etc.). Experience working with ML/agentic AI pipelines (e.g., LangChain, LlamaIndex, etc.). Understanding of Domain-Driven Design concepts and the accompanying Microservice Architecture. Passion for data, analytics, or machine learning. Focus on value: shipping software that matters to the company and the customer. Bonus Points Experience working with vector databases. Experience working within a retail or ecommerce environment. Proficiency in other programming languages such as Scala, Java, Golang, etc. Experience working with Apache Arrow and/or other in-memory columnar data technologies. Supervisory Responsibility Provide mentorship to team members on adopted patterns and best practices. Organize and lead agile ceremonies such as daily stand-ups, planning, etc. Additional Information EEO STATEMENT Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits Discrimination, Harassment, and Retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc.
are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by the state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
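Two of the qualifications above, transforming raw semi-structured data into structured data and working with conformed dimensional models, combine in the transformation sketched below. The schema and field names are hypothetical, chosen only to show the dimension-lookup shape of the work.

```python
# Toy star schema: a conformed product dimension shared by every fact table.
product_dim = {
    "SKU-1": {"product_key": 1, "category": "shoes"},
    "SKU-2": {"product_key": 2, "category": "hats"},
}

def to_fact_rows(raw_events, dim):
    """Turn raw semi-structured events into fact rows keyed on the dimension."""
    facts = []
    for event in raw_events:
        d = dim.get(event["sku"])
        if d is None:
            # Unresolvable keys are dropped here; production pipelines would
            # route them to a reject/late-arriving-dimension queue instead.
            continue
        facts.append({"product_key": d["product_key"],
                      "price": float(event["price"])})
    return facts
```

Because the dimension is conformed, pricing facts and inventory facts built this way share the same `product_key`, which is what lets a BI tool like Tableau join them cleanly.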

Posted 14 hours ago

Apply

2.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Location: Vadodara Company: Sharedpro Technology Pvt Ltd About the Role Sharedpro is seeking a skilled Backend Developer with strong expertise in Java or Python , and hands-on experience working with AI tools and frameworks. The ideal candidate will be responsible for building scalable backend systems, integrating AI-driven modules, and collaborating with cross-functional teams to deliver intelligent applications. Eligibility : Only candidates based in Vadodara or nearby locations will be considered. Key Responsibilities Design, develop, and maintain robust backend systems using Java or Python and related frameworks such as Spring Boot, Flask, or Django. Integrate with AI models, APIs, and tools to enable intelligent features within products. Build RESTful APIs and data pipelines to support ML/AI applications. Optimize performance, scalability, and security of backend infrastructure. Work closely with AI/ML teams to implement model inference, deployment, and monitoring workflows. Write clean, modular, and well-tested code following best practices. Collaborate with frontend developers, product managers, and data engineers. Required Skills & Experience Minimum 2 years of backend development experience with Java or Python. Hands-on experience with AI/ML tools, model integration, or inference engines (e.g., Hugging Face, OpenAI, TensorFlow, LangChain). Strong understanding of REST APIs, JSON, and asynchronous programming. Experience with relational or NoSQL databases (e.g., MySQL, MongoDB). Familiarity with Docker, Git, and CI/CD pipelines. Basic knowledge of cloud platforms such as AWS, GCP, or Azure. Preferred Qualifications Exposure to large language model APIs (e.g., OpenAI, Cohere, Claude). Knowledge of vector databases (e.g., Pinecone, Weaviate, FAISS). Understanding of microservices architecture and API gateway implementations.
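Integrating remote AI model APIs (OpenAI, Hugging Face, and the like, as listed above) into a backend usually needs a retry wrapper around the flaky network call. A minimal sketch follows; the exponential backoff is a common convention, not a prescription, and `backoff=0.0` is the default only so the sketch runs instantly.

```python
import time

def call_with_retry(fn, attempts=3, backoff=0.0):
    """Call fn(), retrying on failure with exponential backoff between tries."""
    last_error = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:   # a real service would narrow this to
            last_error = exc       # timeouts / rate-limit errors only
            time.sleep(backoff * (2 ** i))
    raise last_error
```

Wrapping the inference call (`call_with_retry(lambda: client.generate(prompt))`, where `client` is whatever hypothetical SDK is in use) keeps transient failures out of request handlers without scattering try/except blocks through the codebase.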

Posted 14 hours ago

Apply

0 years

0 Lacs

Jamshedpur, Jharkhand, India

On-site

To be a successful MES (Manufacturing Execution System) developer, you need a blend of technical skills, including proficiency in programming languages like Java, C#, Python, and SQL, along with knowledge of MES software, ERP systems, and industrial automation systems like PLCs and SCADA. Here's a more detailed breakdown of the key skill sets:

Technical Skills:
- Programming Languages: Proficiency in Java, C#, Python, and SQL is crucial for developing and customizing MES applications. Depending on the specific MES software, familiarity with other languages like XML, VBScript, or .NET might be required.
- MES Software & Systems: Experience with specific MES platforms (e.g., AVEVA, GE, Emerson, Rockwell FactoryTalk Production Suite, SAP, etc.) is highly valued. Understanding how MES integrates with ERP systems (e.g., SAP, Oracle) is essential. Knowledge of industrial automation systems, including SCADA (Supervisory Control and Data Acquisition) and PLCs (Programmable Logic Controllers), is crucial for understanding the manufacturing environment.
- Databases: Strong SQL skills are needed for data management, querying, and reporting within the MES system. Familiarity with other database technologies might be beneficial depending on the MES platform.
- Scripting & Reporting: Ability to write scripts for automating tasks and customizing the MES system, plus experience generating reports and dashboards to monitor manufacturing performance.
- API Integration & System Connectivity: Knowledge of APIs and their use for integrating MES with other systems, and understanding how to connect MES with various devices and systems (e.g., sensors, PLCs, ERP).
- Troubleshooting & Debugging: Ability to diagnose and resolve issues within the MES system, with experience debugging MES software and applications.

Industry Knowledge:
- Manufacturing Processes: Understanding of different manufacturing processes and industries (e.g., pharmaceutical, automotive, food & beverage).
- Industry Compliance Standards: Familiarity with industry-specific compliance standards (e.g., GMP for pharmaceuticals).

Soft Skills:
- Communication: Strong communication skills are crucial for collaborating with engineers, operators, and other stakeholders.
- Problem-Solving: Ability to identify and solve complex problems within the manufacturing environment.
- Adaptability: Willingness to learn new technologies and adapt to changing requirements.
- Teamwork: Ability to work effectively in a team environment.
- Time Management: Ability to manage multiple tasks and projects effectively.
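The SQL reporting skills described above boil down to aggregation queries over production data. Here is a small, self-contained taste using Python's built-in sqlite3 module; the table and column names are invented for the example, not taken from any particular MES.

```python
import sqlite3

# Compute per-line yield from production records (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE production_orders (
    id INTEGER PRIMARY KEY, line TEXT, qty_planned INTEGER, qty_good INTEGER)""")
conn.executemany(
    "INSERT INTO production_orders VALUES (?, ?, ?, ?)",
    [(1, "L1", 100, 96), (2, "L1", 200, 188), (3, "L2", 150, 150)],
)
rows = conn.execute("""
    SELECT line,
           SUM(qty_good) AS total_good,
           ROUND(100.0 * SUM(qty_good) / SUM(qty_planned), 1) AS yield_pct
    FROM production_orders
    GROUP BY line
    ORDER BY line
""").fetchall()
# rows -> [('L1', 284, 94.7), ('L2', 150, 100.0)]
```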

Posted 14 hours ago

Apply

4.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Job Title: AI Engineer – Developer
Experience: 4 to 9 Years (Minimum 2+ Years in LLM/Agentic Systems)
Location: Gurugram / Bangalore - Hybrid

Role Overview
We are seeking talented AI Engineers to lead the design, development, and delivery of LLM-powered, agentic AI solutions with robust Retrieval-Augmented Generation (RAG) pipelines and cutting-edge prompt engineering practices. You will work closely with cross-functional teams to build scalable, secure, and production-ready AI systems.

Key Responsibilities
- Generative AI & RAG Development: Design, implement, and optimize RAG architectures (retrieval, augmentation, generation) for production use cases.
- LLM API Integration: Integrate and fine-tune large language model APIs (OpenAI, FILxGPT, etc.) for scalable, low-latency services.
- Prompt Engineering: Craft, test, and refine high-impact prompts, chains, and templates to optimize LLM outputs.
- Agentic Workflow Orchestration: Architect AI agents that combine retrieval, LLM generation, and external tool integrations.
- End-to-End AI Pipelines: Build systems covering data ingestion, embedding/indexing, hybrid retrieval, grounded generation, and result delivery.
- Model Governance & Compliance: Implement model versioning, monitoring, bias detection, and security audits.
- Collaboration & Mentorship: Work with product teams, guide junior developers, and ensure AI best practices across projects.

Required Qualifications
- Experience: 4–9 years in AI/ML engineering, with a minimum of 2 years building LLM or agentic AI systems in production.
- Technical Expertise: Strong knowledge of prompt engineering and RAG architectures. Proficiency in Python and vector databases (e.g., Pinecone, FAISS, Weaviate). Hands-on experience with cloud platforms (AWS preferred). Experience in containerization (Docker) and orchestration (Kubernetes). Familiarity with CI/CD pipelines for AI deployments.
- Soft Skills: Strong communication skills with the ability to explain complex AI concepts to diverse audiences. Leadership and mentorship experience for Senior Developer roles.

Preferred Skills
- Experience with multi-modal AI systems (text, image, speech).
- Knowledge of LangChain, LlamaIndex, or similar frameworks.
- Exposure to MLOps and AI monitoring tools.
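The three RAG stages named in the listing can be sketched end to end. This is a minimal illustration with an in-memory corpus and token-overlap scoring; a production pipeline would swap in a vector database (Pinecone, FAISS, Weaviate) for `retrieve` and an LLM API call for the generation step. All names and documents here are invented.

```python
# Toy corpus standing in for an indexed knowledge base.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
    "Passwords must be at least 12 characters long.",
]

def retrieve(query: str, k: int = 1) -> list:
    """Retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(DOCS, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def augment(query: str, contexts: list) -> str:
    """Augmentation: ground the prompt in the retrieved context."""
    block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{block}\nQuestion: {query}"

query = "How long do refunds take?"
prompt = augment(query, retrieve(query))
# Generation: `prompt` would now be sent to the LLM API.
```

Grounding the generation step in retrieved context, rather than relying on the model's parametric memory, is what makes RAG outputs auditable against the source documents.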

Posted 14 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our company:
At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

What You’ll Do
This role will require you to:
- Design, develop, and maintain scalable and high-performing enterprise applications
- Innovate, develop, and test with the help of AI tools and deliver to market at a rapid pace
- Write efficient, scalable, and clean code primarily in Java and Python
- Collaborate with cross-functional teams to define, design, and ship new features
- Ensure the availability, reliability, and performance of deployed applications
- Integrate with CI/CD pipelines to facilitate seamless deployment and development cycles
- Monitor and optimize application performance and troubleshoot issues as needed
- Evaluate, investigate, and tune/optimize the performance of the application
- Resolve customer incidents and provide support to Customer Support and Operations teams
You will be successful in achieving measurable improvements in software performance and user satisfaction.

Who You’ll Work With
You will join a high-performing engineering team with:
- An emphasis on innovation, continuous learning, and open communication
- A strong focus on mutual respect and empowering team members
- A commitment to celebrating diverse perspectives and fostering professional growth
This is an Individual Contributor role working closely with team members and reporting to an Engineering Manager.

What Makes You a Qualified Candidate
- BTech/MTech in CSE/IT/related disciplines
- 3-5 years of relevant industry experience
- Expert-level knowledge in SQL and any programming language
- Proficiency using modern AI tools such as Copilot/Claude for accelerated development
- Working experience with Java, Python, and REST APIs in Linux environments
- Must have worked in one or more public cloud environments – AWS, Azure, or Google Cloud
- Excellent communication and teamwork skills

What You’ll Bring
You will be a preferred candidate if you have:
- Experience with containerization (Docker) and orchestration tools (Kubernetes)
- Experience working with modern data engineering tools such as Airbyte, Airflow, and dbt
- Working knowledge of Agentic AI and MCP
- Working knowledge of the Teradata database
- A proactive and solution-oriented mindset with a passion for technology and continuous learning
- An ability to work independently and take initiative while contributing to the team’s success
- Creativity and adaptability in a dynamic environment
- A strong sense of ownership, accountability, and a drive to make an impact

Why We Think You’ll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 14 hours ago

Apply

0.0 years

0 - 0 Lacs

Gandhidham, Gujarat

On-site

Mount Litera Zee School Gandhidham has a vacancy for a Computer Teacher for primary standards. Eligible candidates must be well versed in programming languages and technologies like Python, MySQL, C, C++, Java, HTML, etc.
Job Type: Full-time
Pay: ₹8,000.00 - ₹20,000.00 per month
Location: Gandhidham, Gujarat (Preferred)
Work Location: In person

Posted 14 hours ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibility:
- Design, build, and maintain robust Python applications and APIs using frameworks like Django, Flask, and FastAPI, and leverage microservices or serverless architectures.
- Mentor and lead a team of Python developers—conducting code reviews, promoting clean coding standards and best practices, and ensuring overall code quality.
- Work closely with stakeholders to translate business requirements into scalable technical solutions.
- Oversee CI/CD pipelines, containerization (Docker, Kubernetes), DevOps tooling, and possibly cloud-native or serverless deployments (AWS, GCP, etc.).

Eligibility:
- 8+ years in Python development, with at least some experience in a leadership or technical lead role.
- Strong proficiency in at least one major Python web framework (Django, Flask, FastAPI).
- Solid background in RESTful API design, microservices, and cloud-native/serverless architectures.
- Proven skills in CI/CD, containerization, and familiarity with cloud platforms (AWS preferred).
- Hands-on experience with both relational (PostgreSQL, MySQL) and NoSQL systems, including ORM tools.
- Strong Git workflows; comfortable with tools like Jira and Confluence.
- Excellent communication, team-building, decision-making, and problem-solving abilities.

Posted 14 hours ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Overview
Here at ShyftLabs, we are searching for an experienced Data Scientist who can drive performance improvements and cost efficiency in our product through a deep understanding of ML and infrastructure systems, and provide data-driven insights and scientific solutions. ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries, by focusing on creating value through innovation.

Job Responsibilities:
- Research, design, and develop innovative generative AI models and applications
- Collaborate with cross-functional teams to identify opportunities for AI-driven solutions
- Train and fine-tune AI models on large datasets to achieve optimal performance
- Optimize AI models for deployment in production environments
- Stay up-to-date with the latest advancements in AI and machine learning
- Collaborate with data scientists and engineers to ensure data quality and accessibility
- Design, implement, and optimize machine learning algorithms for tasks like classification, prediction, and clustering
- Develop and maintain robust AI infrastructure
- Document technical designs, decisions, and processes, and communicate progress and results to stakeholders
- Work with cross-functional teams to integrate AI/ML models into production-level applications

Basic Qualifications:
- Master's degree in a quantitative discipline or equivalent
- 5+ years minimum professional experience
- Distinctive problem-solving skills; good at articulating product questions, pulling data from large datasets, and using statistics to arrive at a recommendation
- Excellent verbal and written communication skills, with the ability to present information and analysis results effectively
- Ability to build positive relationships within ShyftLabs and with our stakeholders, and work effectively with cross-functional partners in a global company
- Statistics: must have strong knowledge and experience in experimental design, hypothesis testing, and various statistical analysis techniques such as regression or linear models
- Machine Learning: must have a deep understanding of ML algorithms (e.g., deep learning, random forest, gradient boosted trees, k-means clustering) and their development, validation, and evaluation
- Programming: experience with Python or other scripting languages and a database language (e.g., SQL) or data manipulation tools (e.g., Pandas)

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
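The hypothesis-testing requirement above can be illustrated with a hand-rolled Welch's two-sample t-statistic. In practice `scipy.stats.ttest_ind(..., equal_var=False)` does this in one call; the sample values below are made up purely for illustration.

```python
import math
import statistics

# Made-up control/treatment measurements (e.g., page load times, conversion scores).
control   = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
treatment = [12.9, 13.1, 12.7, 13.0, 12.8, 13.2]

def welch_t(a, b):
    """Welch's t-statistic: mean difference scaled by its standard error,
    without assuming equal variances in the two samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mean_b - mean_a) / math.sqrt(var_a / len(a) + var_b / len(b))

t_stat = welch_t(control, treatment)  # large |t| -> evidence against "no difference"
```

A full test would convert `t_stat` to a p-value using the Welch-Satterthwaite degrees of freedom; the statistic alone already conveys the effect size relative to noise.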

Posted 14 hours ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations. About Factset FactSet Research Systems Inc. is a global provider of integrated financial information, analytical applications and industry-leading services for investment and corporate communities. As a publicly traded company (NYSE: FDS | NASDAQ: FDS) recently added to the S&P 500 index, FactSet delivers superior content, analytics, and flexible technology to help more than 162,000 users see and seize opportunity sooner. For over 40 years, the company has served financial professionals, which include portfolio managers, investment research professionals, investment bankers, risk and performance analysts, wealth advisors and corporate clients. FactSet gives our clients the edge to outperform with informed insights, workflow solutions across the portfolio lifecycle, and industry-leading support from dedicated specialists. PROCESS BRIEF The primary function of this position is to perform various tasks related to the production & distribution of Portfolio & Benchmark performance, risk statistics, characteristics, & reporting. The individual will work under minimal supervision. Candidate will be a key resource in process improvement & building strong relationships with client and offshore teams, internal product, and engineering teams. 
Serves as a first-level escalation point for client inquiries regarding the nature of investment products and their portfolios, along with inquiries related to risk numbers. The role requires growing expertise in the fixed income risk domain, including a solid understanding of portfolio- and security-level risk modeling techniques, the sources of tracking error, and how the data quality of model inputs (e.g., terms and conditions, pricing) can impact the quality of the analytics and risk model results.

Job Responsibilities & Eligibility Criteria
- A Master’s/Bachelor's degree or equivalent in Finance, Engineering, or similar fields.
- Knowledge of all investment product types, including those used for fixed income mandates (e.g., structured products and credit derivatives).
- Minimum 1-3 years of experience required.
- Basic understanding of concepts like VaR, stress testing, back testing, etc.
- Understanding of portfolio performance calculations and performance reports associated with accounts and composites.
- Analytically inclined, with knowledge of programming (e.g., Python) being an added advantage.
- Ability to perform financial/mathematical calculations (or analysis) using MS Excel or other relevant tools.
- Knowledge of financial instruments, markets, and analytical understanding.
- Ability to work under pressure, perform multiple tasks in a fast-paced team environment, and organize and prioritize workflow.
- Flexible to work in rotational shifts, including US shift hours.
- Ready to work in a hybrid model.

At FactSet, we celebrate diversity of thought, experience, and perspective. We are committed to disrupting bias and a transparent hiring process. All qualified applicants will be considered for employment regardless of race, color, ancestry, ethnicity, religion, sex, national origin, gender expression, sexual orientation, age, citizenship, marital status, disability, gender identity, family status or veteran status. FactSet participates in E-Verify.
Returning from a break? We are here to support you! If you have taken time out of the workforce and are looking to return, we encourage you to apply and chat with our recruiters about our available support to help you relaunch your career. Company Overview FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.

Posted 14 hours ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities
• Build Gen AI-enabled solutions using online and offline LLMs, SLMs, and TLMs tailored to domain-specific problems.
• Deploy agentic AI workflows and use cases using frameworks like LangGraph, Crew AI, etc.
• Apply NLP, predictive modelling, and optimization techniques to develop scalable machine learning solutions.
• Integrate enterprise knowledge bases using Vector Databases and Retrieval Augmented Generation (RAG).
• Apply advanced analytics to address complex challenges in the Healthcare, BFSI, and Manufacturing domains.
• Deliver embedded analytics within business systems to drive real-time operational insights.

Required Skills & Experience
• 3–5 years of experience in applied Data Science or AI roles.
• Experience working in any one of the following domains: BFSI, Healthcare/Health Sciences, Manufacturing, or Utilities.
• Proficiency in Python, with hands-on experience in libraries such as scikit-learn and TensorFlow.
• Practical experience with Gen AI (LLMs, RAG, vector databases), NLP, and building scalable ML solutions.
• Experience with time series forecasting, A/B testing, Bayesian methods, and hypothesis testing.
• Strong skills in working with structured and unstructured data, including advanced feature engineering.
• Familiarity with analytics maturity models and the development of Analytics Centres of Excellence (CoEs).
• Exposure to cloud-based ML platforms like Azure ML, AWS SageMaker, or Google Vertex AI.
• Data visualization using Matplotlib, Seaborn, and Plotly; experience with Power BI is a plus.

What We Look for (Values & Behaviours)
• AI-First Thinking – Passion for leveraging AI to solve business problems.
• Data-Driven Mindset – Ability to extract meaningful insights from complex data.
• Collaboration & Agility – Comfortable working in cross-functional teams with a fast-paced mindset.
• Problem-Solving – Think beyond the obvious to unlock AI-driven opportunities.
• Business Impact – Focus on measurable outcomes and real-world adoption of AI.
• Continuous Learning – Stay updated with the latest AI trends, research, and best practices.
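At their core, the vector databases mentioned above perform nearest-neighbour search by cosine similarity over embedding vectors. Here is a toy sketch with 3-d stand-ins for real embeddings (which typically have hundreds of dimensions); the document keys and vectors are invented for illustration.

```python
import math

# Toy "index": document key -> embedding vector.
INDEX = {
    "refund policy": [0.9, 0.1, 0.0],
    "pricing tiers": [0.1, 0.9, 0.2],
    "login issues":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query_vec, k=1):
    """Return the k document keys most similar to the query vector."""
    return sorted(INDEX, key=lambda doc: cosine(query_vec, INDEX[doc]),
                  reverse=True)[:k]
```

Real vector databases add approximate-nearest-neighbour indexing (e.g., HNSW) so this lookup stays fast over millions of vectors.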

Posted 14 hours ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description - SSE: Python
Experience: 4+ years

Key Responsibilities
- Responsible for providing expertise in the software development life cycle, from concept, architecture, and design to implementation and testing.
- Leading and mentoring small-sized teams.
- Ensuring code reviews and development best practices/processes are followed.
- Be part of regular client communication.
- Estimate efforts, identify risks, and provide technical support whenever needed.
- Ensure effective people management (performance reviews and feedback, at a minimal level) and task management for smooth execution.
- Demonstrate the ability to multitask and re-prioritize responsibilities based on dynamic requirements.

Key Skills
- Good experience in software architecture, system design, and development.
- Extensive development experience in Java/Python programming.
- Good experience in JavaScript technologies preferred (e.g., React/Angular/Vue).
- Strong fundamentals in Object-Oriented Design and Data Structures.
- Expertise in using Python for developing scalable systems with messaging and task queue architectures.
- Experience working with customers directly, including initial requirement gathering, day-to-day technical discussions, technical demos, and project delivery.
- Experience in developing RESTful web services using any framework.
- Experience with the Agile software development methodology.
- Experience with Linux programming, or expertise in the areas of Big Data and/or Data Analytics, is a plus.
- Prior experience in leading/mentoring a team is preferred.
- Should possess excellent oral, written, problem-solving, and analytical skills.
- Must be able to succeed with minimal resources and supervision.

Education: B.E, B.Tech, MCA, Diploma Computer/IT
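The "messaging and task queue architectures" skill mentioned above can be sketched with the standard library alone. In production this role is usually played by Celery or RQ with a broker such as Redis or RabbitMQ; the squaring "task" below is a stand-in for real work.

```python
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    """Pull jobs off the queue until a None sentinel arrives."""
    while True:
        item = tasks.get()
        if item is None:            # sentinel value shuts this worker down
            tasks.task_done()
            break
        with lock:                  # guard shared state across workers
            results.append(item * item)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for n in range(5):                  # enqueue five jobs
    tasks.put(n)
for _ in threads:                   # one sentinel per worker
    tasks.put(None)
tasks.join()                        # block until every job is acknowledged
for t in threads:
    t.join()
```

The same producer/worker/sentinel shape carries over directly to distributed queues; only the transport changes.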

Posted 14 hours ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are seeking an experienced Data Scientist III to join our team. In this role, you will manage the entire data science lifecycle, including data acquisition, preparation, model development, deployment, and monitoring. You will apply advanced techniques in Python programming, machine learning (ML), deep learning (DL), and generative AI to address complex challenges and develop high-impact solutions. Your role will involve leading projects, collaborating with cross-functional teams, and driving data-driven decision-making across the organization.

What your impact will look like here:
- Advanced Model Development: Design, develop, and implement sophisticated machine learning and deep learning models for applications such as customer churn prediction, recommendation systems, and advanced anomaly detection.
- Programming and Libraries: Utilize Python and its advanced libraries (e.g., TensorFlow, PyTorch, Scikit-learn) to build and optimize models.
- Generative AI Exploration: Leverage cutting-edge generative AI techniques to create innovative solutions for complex problems and enhance engagement strategies.
- Leadership and Collaboration: Lead cross-functional teams, including product managers, engineers, and data analysts, to translate business needs into effective data-driven solutions.
- Integration and Deployment: Oversee the integration of models into production systems, ensuring seamless deployment and performance.
- Pipeline Management: Develop and maintain robust data pipelines for ingestion, transformation, and model training.
- Performance Monitoring: Continuously monitor model performance, conduct evaluations, and implement improvements as necessary.
- Communication and Strategy: Communicate complex technical concepts clearly to stakeholders and contribute to strategic data science initiatives.
- Best Practices and Innovation: Drive the development of best practices, tools, and frameworks for data science and machine learning, staying abreast of industry advancements.
You will love this job if you have:
- Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, or a related quantitative field.
- 7-9 years of experience in data science, machine learning, or a relevant domain.
- Proficiency in Python programming and extensive experience with relevant libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Proven track record of delivering production-ready AI/ML solutions.
- Strong expertise in machine learning and deep learning algorithms and techniques.
- Experience with generative AI models and advanced data science techniques.
- Proficiency in data manipulation, analysis, and visualization using Python libraries (e.g., Pandas, NumPy, Matplotlib).
- Experience with cloud platforms and big data technologies (e.g., AWS).
- Excellent problem-solving, analytical, and leadership skills.
- Strong communication and collaboration skills, with the ability to explain complex technical concepts to diverse audiences.

Qualifications - Preferred
- Experience with advanced techniques such as graph neural networks, reinforcement learning, or similar.
- Familiarity with agile development methodologies and DevOps practices.
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Contributions to open-source projects or published research in machine learning, deep learning, or generative AI.
- Experience working with government or public sector data.
- Proficiency with data visualization tools (e.g., Tableau, Power BI).

The Team
We are a globally distributed workforce across the United States, Canada, United Kingdom, India, Armenia, Australia, and New Zealand.

The Culture
At Granicus, we are building a transparent, inclusive, and safe space for everyone who wants to be a part of our journey.
A few culture highlights include –
- Employee Resource Groups to encourage diverse voices
- Coffee with Mark sessions – our employees get to interact with our CEO on very important and sometimes difficult issues, ranging from mental health to work-life balance and current affairs
- Embracing diversity and fostering a culture of ideation, collaboration, and meritocracy
- We bring in special guests from time to time to discuss issues that impact our employee population

The Company
Serving the People Who Serve the People
Granicus is driven by the excitement of building, implementing, and maintaining technology that is transforming the GovTech industry by bringing governments and their constituents together. We are on a mission to support our customers in meeting the needs of their communities and implementing our technology in ways that are equitable and inclusive. Granicus has consistently appeared on the GovTech 100 list over the past 5 years and has been recognized as one of the best companies to work for on BuiltIn. Over the last 25 years, we have served 5,500 federal, state, and local government agencies, and more than 300 million citizen subscribers power an unmatched Subscriber Network that uses our digital solutions to make the world a better place. With comprehensive cloud-based solutions for communications, government website design, meeting and agenda management software, records management, and digital services, Granicus empowers stronger relationships between government and residents across the U.S., U.K., Australia, New Zealand, and Canada. By simplifying interactions with residents, while disseminating critical information, Granicus brings governments closer to the people they serve—driving meaningful change for communities around the globe. Want to know more? See more of what we do here.

The Impact
We are proud to serve dynamic organizations around the globe that use our digital solutions to make the world a better place — quite literally.
We have so many powerful success stories that illustrate how our solutions are impacting the world. See more of our impact here.

The Process
- Assessment – Take a quick assessment.
- Phone screen – Speak to one of our talented recruiters to ensure this could be a fit.
- Hiring Manager/Panel interview – Talk to the hiring manager so they can learn more about you and you about Granicus. Meet more members on the team! Learn more and share more.
- Reference checks – Provide 2 references so we can hear about your awesomeness.
- Verbal offer – Let’s talk numbers, benefits, culture, and answer any questions.
- Written offer – Sign a formal letter and get excited because we sure are!

Benefits at Granicus India
Along with the challenges of the job, Granicus offers employees an attractive benefits package which includes –
- Hospitalization Insurance Policy covering employees and their family members, including parents
- All employees are covered under Personal Accident Insurance & Term Life Insurance policies
- All employees can avail of an annual health check facility
- Eligibility for reimbursement of telephone and internet expenses
- Wellness Allowance to avail health club memberships and/or access to physical fitness centres
- Wellbeing Wednesdays, which include 1x global Unplug Day and 2x No Meeting Days every quarter
- Memberships for ‘meditation and mindfulness’ apps, including on-demand mental health support 24/7
- Access to learning management systems: Say., LinkedIn Learning Premium account membership & many more
- Access to the Rewards & Recognition portal and quarterly recognition program

Security and Privacy Requirements
- Responsible for Granicus information security by appropriately preserving the Confidentiality, Integrity, and Availability (CIA) of Granicus information assets in accordance with the company's information security program.
- Responsible for ensuring the data privacy of our employees and customers, their data, as well as taking all required privacy training in a timely manner, in accordance with company policies. Granicus is committed to providing equal employment opportunities. All qualified applicants and employees will be considered for employment and advancement without regard to race, color, religion, creed, national origin, ancestry, sex, gender, gender identity, gender expression, physical or mental disability, age, genetic information, sexual or affectional orientation, marital status, status regarding public assistance, familial status, military or veteran status or any other status protected by applicable law.

Posted 14 hours ago

Apply

0 years

0 Lacs

India

Remote

Machine Learning Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship
Application Deadline: 15th August 2025

About WebBoost Solutions by UM
WebBoost Solutions by UM provides students and graduates with hands-on learning and career growth opportunities in machine learning and data science.

Role Overview
As a Machine Learning Intern, you’ll work on real-world projects, gaining practical experience in machine learning and data analysis.

Responsibilities
✅ Design, test, and optimize machine learning models.
✅ Analyze and preprocess datasets.
✅ Develop algorithms and predictive models for various applications.
✅ Use tools like TensorFlow, PyTorch, and Scikit-learn.
✅ Document findings and create reports to present insights.

Requirements
🎓 Enrolled in or a graduate of a relevant program (AI, ML, Data Science, Computer Science, or a related field).
📊 Knowledge of machine learning concepts and algorithms.
🐍 Proficiency in Python or R (preferred).
🤝 Strong analytical and teamwork skills.

Benefits
💰 Stipend: ₹7,500 - ₹15,000 (performance-based, paid)
✔ Practical machine learning experience.
✔ Internship Certificate & Letter of Recommendation.
✔ Build your portfolio with real-world projects.

How to Apply
📩 Submit your application by 15th August 2025 with the subject: "Machine Learning Intern Application".

Equal Opportunity
WebBoost Solutions by UM is an equal opportunity employer, welcoming candidates from all backgrounds.
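The "design, test, and optimize machine learning models" loop described above can be shown at its smallest scale: fitting y = w·x + b by batch gradient descent in pure Python. scikit-learn's `LinearRegression` would replace all of this in practice; the data here is synthetic.

```python
# Synthetic training data generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b, lr = 0.0, 0.0, 0.05           # initial weights and learning rate
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * grad_w, b - lr * grad_b
# After training, w and b should be close to the true values 2 and 1.
```

The same gradient-descent idea, scaled up with automatic differentiation and mini-batches, underlies training in TensorFlow and PyTorch.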

Posted 14 hours ago

Apply

10.0 years

0 Lacs

India

Remote

Full-Stack Engineering Lead
Location: Remote / Hybrid

We're looking for an experienced Full-Stack Engineering Lead to architect, build, and guide teams in delivering world-class web and cloud applications. This is a hands-on leadership role where you'll own technical direction, mentor developers, and deliver products that make a real impact for our clients and in-house ventures.

What you'll bring:
- 10+ years in full-cycle software development
- 3+ years in a Tech Lead role
- Designed and developed at least two applications from scratch
- Expert in JavaScript, TypeScript, Node.js
- Strong with MongoDB (NoSQL), SQL Server, PostgreSQL
- AWS skills: Lambda, S3, RDS, API Gateway
- Front-end expertise: React, React Native
- Agile/Scrum experience with sprint planning & product feature management
- Proven experience managing developers, architecture, and code reviews

Nice to have:
- AWS Certification
- Docker, Kubernetes, Microservices
- Python with a Java or .NET background
- DevOps/SRE practices (CI/CD, automation, monitoring)
- Serverless development on AWS or Azure Functions

Why Qubryx:
- Competitive pay with performance bonuses
- Onsite opportunities
- Paid time off
- Flexible remote work
- Continuous learning and on-the-job training

Posted 14 hours ago

Apply

10.0 years

0 Lacs

India

Remote

Required Skills: Lead experience, DevOps, OCI, AWS

Key Job Responsibilities:
- Lead, design, and develop build and deployment solutions for JavaScript, .NET, MuleSoft, and ERP applications using enterprise-level automation tools
- Lead, research, design, and implement strategies for continuous integration and continuous deployment (CI/CD) and release management
- Use automation to provision and maintain Amazon Web Services cloud infrastructure
- Use automation to provision and maintain Oracle Cloud Infrastructure resources
- Build pipelines to compile and deploy code to target systems
- Build pipelines to manage configurations on target systems
- Set up integration between DevOps tools like GitHub, TeamCity, Octopus Deploy, New Relic, JIRA, and ServiceNow to enable automated processes for issue and change request deployments
- Research, develop, and implement best practices/methodologies for infrastructure provisioning (including Infrastructure as Code), application scaling, and configuration management
- Engineer systems and tools to support the build, integration, and verification of complex software systems spanning multiple hardware platforms, mobile platforms, and cloud-based platforms and services
- Work with the Information Services delivery team to implement and maintain highly scalable build and release solutions, including continuous delivery, optimization, monitoring, release management, and support for all Driscoll's IS systems
- Manage Driscoll's GitHub source code repositories for internal projects and vendor-developed systems
- Contribute to the development and implementation of business continuity and disaster recovery processes

Job Requirements:
- Minimum of a Bachelor's Degree in Software Engineering, Computer Science, or equivalent
- 10+ years of experience in DevOps engineering
- 5+ years of experience leading DevOps teams
- Extensive experience with the software development lifecycle, including branching and versioning strategies to enable continuous integration/deployment
- Environment and configuration management experience
- Familiarity with the software testing lifecycle and testing frameworks and processes is a plus
- Experience in Oracle Fusion Cloud ERP deployments
- Experience developing and maintaining build and deployment processes and scripting
- Extensive experience working with GitHub, a cloud-based source code management tool
- Extensive experience with CI tools (TeamCity, Jenkins), package deployment tools (Octopus Deploy), and configuration management tools (Terraform, Ansible)
- Extensive experience with cloud platforms such as Amazon Web Services (EC2, S3, CloudFormation; Glue, DynamoDB, and Redshift are all plusses) and Oracle Cloud Infrastructure (Oracle SaaS and PaaS offerings, governance, OCI networking); Azure experience is a plus
- Experience with monitoring tools (New Relic, Grafana)
- Experience with code quality and security tools (Snyk, SonarQube)
- Experience with JIRA for issue tracking and ServiceNow for incident and change management
- Strong programming/scripting skills (Python, PowerShell, Bash)
- Advanced English communication skills with all levels of the organization (written, verbal, digital, formal presentations)

Posted 14 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies