Jobs
Interviews

159 Multiprocessing Jobs

Set up a job alert
JobPe aggregates job results for easy access, but you apply directly on the employer's job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

When you join Verizon: You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife. What You'll Be Doing... You'll be instrumental in architecting and building our Network System Performance data pipelines, leveraging hands-on experience with cutting-edge GCP services. This includes a thorough knowledge of various Big Data platforms and technologies, ensuring robust and scalable solutions capable of handling massive volumes of data. Working with a large dataset environment, solving complex technical problems with the assistance of other subject-matter specialists, and partnering with other teams to drive Verizon's initiatives. Responsible for targets on improving various KPIs through personalized data-driven recommendations. Identifying business problems and developing analytic techniques to provide solutions to external teams. Identifying opportunities, sizing potential gains, and presenting actionable plans. Driving multiple technical and data-related projects of varying scopes and complexities. Working with other business units within Verizon, vendors, and executives to represent and drive our organization's strategy. Using data visualization and other data-gathering methods to either provide internal strategic direction or guide decisions for other teams. Working directly with developers and product managers/stakeholders to conceptualize, build, test, and realize value-based solutions.
Influencing and obtaining buy-in from external and internal stakeholders by effectively balancing the social system. Performing strategic roles in both technology and strategic management. Driving a culture of innovation: championing a culture of innovation and drive, and encouraging the team to participate in experimentation, hackathons, coding events, and other organization-wide events and efforts. What we're looking for... This role demands continuous innovation, broad technical skills, and an in-depth understanding of business priorities, market insights, and technology changes in the industry. You thrive in a fast-paced, dynamic software development environment. You are flexible, dependable, and work well in varying environments. You'll Need To Have: Bachelor's degree or four or more years of work experience. Six or more years of relevant work experience. Three or more years as a data scientist with exposure to full-stack model development, deployment, evaluation, optimization, and scaling. Experience in the Telecom Network domain and Network OSS solutions. Working knowledge of the Telecom Network domain. Four or more years of hands-on experience with the below stack: Programming - proficiency in Python, Java, and R, plus relevant AI libraries/frameworks. Data streaming - Apache Kafka/Confluent Kafka, Apache Spark/Flink/Beam, and Apache Pulsar. ETL (Extract, Transform, Load) processes - data cleansing, aggregation, and enrichment. Cloud - GCP services including Dataflow, Cloud Spanner, GKE, and BigQuery; AWS. Programming skills - knowledge of GPU/CPU architecture and distributed computing. Exposure to large-scale AI training, understanding of compute system concepts (latency/throughput bottlenecks, pipelining, multiprocessing, etc.) and related performance analysis and tuning. Exposure to GenAI/Agentic AI models. Excellent communication skills, with the ability to craft compelling narratives from data.
Even better if you have one or more of the following: Certification in data analytics (e.g., Certified Data Analyst). Experience building scalable machine learning models using TensorFlow, Keras, and PyTorch to drive insights and automation. Deep understanding of LLM architectures (e.g., Transformer, GPT, BERT, T5) and generative models. Experience in developing and deploying real-time AI models. Certification in data visualization tools (e.g., Tableau, Power BI). Prior experience with Generative AI techniques applied to Large Language Models and multimodal learning (image, video, speech, etc.). Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours: 40. Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics. Locations - Hyderabad, India
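The Verizon listing leans on multiprocessing and latency/throughput tuning for large-scale pipelines. As a minimal, hedged sketch (the `cell_kpi` metric, data shapes, and worker count are invented for illustration, not taken from the role), CPU-bound per-cell aggregation can be fanned out across worker processes with the standard library:

```python
from concurrent.futures import ProcessPoolExecutor

def cell_kpi(samples):
    """Reduce one cell's latency samples to a KPI (hypothetical metric: the mean)."""
    return sum(samples) / len(samples)

def aggregate_kpis(per_cell_samples, workers=4):
    """Fan CPU-bound aggregation out across processes to sidestep the GIL."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(cell_kpi, per_cell_samples))

if __name__ == "__main__":
    data = [[10, 20, 30], [5, 15], [40, 40]]
    print(aggregate_kpis(data))  # [20.0, 10.0, 40.0]
```

Processes pay a serialization cost per task, so batching work per cell (rather than per sample) is what keeps throughput up.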

Posted 2 days ago

Apply

2.0 years

1 - 3 Lacs

Hyderābād

On-site

About this role: Wells Fargo is seeking a Software Engineer. In this role, you will: Participate in low to moderately complex initiatives and projects associated with the technology domain, including installation, upgrades, and deployment efforts. Identify opportunities for service quality and availability improvements within the technology domain environment. Design, code, test, debug, and document for low to moderately complex projects and programs associated with the technology domain, including upgrades and deployments. Review and analyze technical assignments or challenges that are related to low to medium risk deliverables and that require research, evaluation, and selection of alternative technology domains. Present recommendations for resolving issues, or escalate issues as needed to meet established service level agreements. Exercise some independent judgment while also developing an understanding of the given technology domain in reference to security and compliance requirements. Provide information to technology colleagues, internal partners, and stakeholders. Required Qualifications: 2+ years of software engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education. Desired Qualifications: 2+ years of experience in developing enterprise web applications using React JS with Redux and Python. Good expertise in HTML5, CSS3, JavaScript, ES6, Webpack, and other tools related to React front-end development. 2+ years of experience in Python, Big Data, Hadoop, and object-oriented programming, including REST APIs using Python (Django Rest Framework). 2+ years of DHTML (Dynamic Hypertext Markup Language) experience. Experience with the multiprocessing, multithreading, and asyncio modules and building asynchronous APIs. Proficient with data science Python libraries such as NumPy, Pandas, and SciPy. Solid understanding of distributed computing.
Thoroughly skilled in managing/debugging Python API applications. Strong analytical and problem-solving skills. Experience in source control using Git, etc. Knowledge and understanding of application analysis and tuning, including memory management, process or thread management, and resource management. Excellent verbal, written, and interpersonal communication skills. Working knowledge of UI integration with RESTful APIs, preferably in Python/Django. Good understanding of deployments using CI/CD pipelines with Jenkins, Maven, Git, etc. Strong expertise in working with Node.js, Webpack, and other tools related to React front-end development. Strong communication skills to interact with stakeholders. Agile/Scrum software development methodologies and processes. Understanding of Oracle or any other RDBMS. Exposure to cloud platforms like GCP. Understanding of OCP (OpenShift Container Platform). Job Expectations: Full Stack Developer (Python, ReactJS, RESTful APIs). Posting End Date: 16 Sep 2025 *Job posting may come down early due to volume of applicants. We Value Equal Opportunity Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company.
They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements. Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. Applicants with Disabilities To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more. Wells Fargo Recruitment and Hiring Requirements: a. Third-Party recordings are prohibited unless authorized by Wells Fargo. b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
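The Wells Fargo listing asks for the multiprocessing, multithreading, and asyncio modules and for building asynchronous APIs. A minimal asyncio sketch of the concurrent-I/O idea (with `fetch_record` as an invented stand-in for a real database or REST call, not anything from the posting):

```python
import asyncio

async def fetch_record(record_id):
    """Stand-in for an async I/O call (DB query or REST request); names are illustrative."""
    await asyncio.sleep(0.01)  # simulated network latency
    return {"id": record_id, "status": "ok"}

async def fetch_all(ids):
    """Issue the calls concurrently instead of one after another."""
    return await asyncio.gather(*(fetch_record(i) for i in ids))

results = asyncio.run(fetch_all([1, 2, 3]))
print([r["id"] for r in results])  # [1, 2, 3]
```

Because `gather` preserves argument order, the results line up with the requested ids even though the calls overlap in time.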

Posted 3 days ago

Apply

1.5 years

19 - 39 Lacs

Noida

On-site

Job Summary: RACE Consulting is hiring a Backend Engineer for its client. The role involves building data ingestion workflows and integrations for multiple external tools and technologies to enable out-of-the-box data collection, observability, security, and ML insights. You will be responsible for developing solutions that expand product capabilities and integrations with global security and cloud providers. Responsibilities: Develop End-to-End API integrations using Python. Create technical documentation and end-user guides. Develop proprietary scripts for parsing events and logging. Create and maintain unit tests for developed artifacts. Ensure quality, relevance, and timely updates of the integration portfolio. Comply with coding standards, directives, and legal requirements. Collaborate with internal teams and external stakeholders (partners, suppliers, etc.). Detect and solve complex issues in integrations. Work with platform teams to enhance and build next-gen tools. Requirements: Bachelor’s degree in Computer Science or related fields (Engineering, Networking, Mathematics). 1.5+ years of experience coding in Python. 1+ years of experience with Linux, Docker, Kubernetes, CI/CD. Experience using web API development and testing tools (e.g., Postman). Proactive, problem-solving mindset with curiosity to innovate. Strong communication skills to work with teams/customers globally. Desired Skills: Knowledge of programming patterns & test-driven development. Knowledge of web API protocols. Experience with Python Unit Testing. Advanced Python (multiprocessing, multithreading). Hands-on with Git and CI/CD pipelines. Job Type: Full-time Pay: ₹1,950,000.00 - ₹3,900,000.00 per year Benefits: Flexible schedule Health insurance Paid time off Provident Fund Work Location: In person

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We’re Hiring!!! Software Engineer – Python & Machine Learning Location: Chennai (Work from Office) Notice Period: Immediate to 15 days Are you passionate about Python and eager to work on data-driven applications with machine learning workflows? Join our growing team and work on exciting projects where your skills will make a real impact! Required Skills & Qualifications ✔ 5+ years of experience in Python programming with strong knowledge of data structures and memory management ✔ Experience working with libraries such as Pandas, NumPy, and scikit-learn (sklearn) ✔ Good understanding of object-oriented programming concepts – classes, inheritance, polymorphism, encapsulation ✔ Experience in multiprocessing or multithreading for performance optimization ✔ Familiarity with feature engineering techniques and working with large datasets ✔ Understanding of machine learning workflows – classification, regression, clustering ✔ Awareness of challenges like overfitting, underfitting, and model validation ✔ Skilled in data manipulation using methods like drop_duplicates, transformations, and cleaning ✔ Experience writing unit tests and working in collaborative environments ✔ Excellent communication and problem-solving abilities Preferred Skills (Nice to Have) ⭐ Experience with ML frameworks like TensorFlow, Keras, or PyTorch ⭐ Familiarity with asyncio or task queues like Celery ⭐ Exposure to cloud platforms – AWS, GCP, or Azure ⭐ Experience building APIs using FastAPI or Flask ⭐ Knowledge of Git and continuous integration workflows 📩 Apply Now If you're ready to take your Python expertise to the next level and work with machine learning technologies, we’d love to hear from you! Drop your resume at careers@primesoftinc.com / anitha.u@primesoftinc.com or apply directly through LinkedIn. #python #corepython #ML #Machinelearning #numpy #pandas #MLworkflow #oops
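The listing above calls out data cleaning with methods like `drop_duplicates`. The core of that operation is small enough to sketch without pandas; the row layout and `subset` semantics below mirror `DataFrame.drop_duplicates(subset=..., keep="first")`, but this is an illustrative stand-in, not a replacement:

```python
def drop_duplicate_rows(rows, subset):
    """Keep the first row seen for each key, like drop_duplicates(subset=..., keep='first')."""
    seen, out = set(), []
    for row in rows:
        key = tuple(row[col] for col in subset)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

rows = [
    {"user": "a", "day": 1, "clicks": 3},
    {"user": "a", "day": 1, "clicks": 9},   # duplicate (user, day): dropped
    {"user": "b", "day": 1, "clicks": 4},
]
print(drop_duplicate_rows(rows, subset=["user", "day"]))
```

Keying on a tuple of the subset columns is also how grouping and dedup steps are typically parallelised: partition rows by key, then fan partitions out to workers.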

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Summary: The Client is seeking a highly skilled and motivated Sr. Software Developer with expertise in Python. The ideal candidate will work on designing, developing, and maintaining software solutions that integrate seamlessly with our infrastructure while ensuring scalability, reliability, and efficiency. Responsibilities: Design and develop core components, libraries, and reusable modules using advanced Python programming practices. Write highly performant, multi-threaded, and memory-efficient Python applications. Maintain and optimize legacy Python systems and refactor them for modern architectures. Experience with RESTful API development, data parsing, and service integration. Conduct peer code reviews, enforce code standards, and perform refactoring when needed. Write unit, integration, and functional tests using established frameworks. Maintain detailed documentation of code, APIs, modules, and design decisions. Identify automation opportunities and address them with in-house/enterprise-level automation tools (Ansible, Terraform, shell scripting, Python, Puppet). Requirements: Proven experience in software development with a focus on infrastructure systems. Strong programming experience with Python 3.x, including OOP, data structures, exception handling, and multithreading/multiprocessing. Solid understanding of standard libraries, generators, decorators, and context managers. Familiarity with at least one web framework such as Flask, FastAPI, or Django. Proficient with Git, CI/CD pipelines, and unit/integration testing frameworks (e.g., Pytest). Strong Linux system administration skills. Strong technical/analytical and troubleshooting skills. Familiarity with distributed systems and microservices architecture. Experience with cloud platforms (AWS, Google Cloud, Azure) is a plus. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Experience with DNS, Email, NTP, and SFTP at enterprise scale is a plus.
Familiarity with virtualization technologies like OpenStack, Nutanix, and VMware. Strong problem-solving skills and attention to detail. Excellent communication and collaboration abilities. Team Technical Stack: Python core concepts (OOP, data structures, exception handling, multithreading/multiprocessing), RESTful APIs, and Flask, FastAPI, or Django. #AditiIndia #25-22313
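The role above asks for fluency with generators, decorators, and context managers. A small hedged sketch combining all three (`log_calls`, `timed`, and `normalize` are invented names; `@contextmanager` turns the generator below into a context manager):

```python
import time
from contextlib import contextmanager
from functools import wraps

def log_calls(fn):
    """Decorator: record each call's name and result in a list (illustrative sink)."""
    calls = []
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        calls.append((fn.__name__, result))
        return result
    wrapper.calls = calls
    return wrapper

@contextmanager
def timed(label, sink):
    """Context manager built from a generator: append (label, elapsed seconds) on exit."""
    start = time.perf_counter()
    try:
        yield
    finally:
        sink.append((label, time.perf_counter() - start))

@log_calls
def normalize(host):
    return host.strip().lower()

timings = []
with timed("normalize", timings):
    print(normalize("  MAIL01.EXAMPLE.COM "))  # mail01.example.com
print(normalize.calls)
```

The `finally` clause is what makes the timing robust: the elapsed time is recorded even if the wrapped block raises.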

Posted 6 days ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Palayam, Thiruvananthapuram, Kerala

On-site

Required Skills & Qualifications: 3+ years of experience in Python development, specializing in AI-driven applications. Strong expertise in FastAPI for high-performance backend development. Experience working with LLMs (GPT, Llama, Claude, etc.) and AI model deployment. Hands-on experience with LangChain for AI-driven workflows. Experience with vector databases (FAISS, Pinecone, Weaviate, ChromaDB, etc.). Knowledge of RESTful APIs, GraphQL, and authentication mechanisms. Familiarity with Hugging Face, OpenAI APIs, and fine-tuning LLMs. Experience in asynchronous programming, multiprocessing, and performance tuning. Strong problem-solving skills, debugging expertise, and experience in Agile/Scrum methodologies. Key Responsibilities: AI Model Integration: Develop and integrate LLMs into the WayVida platform using LangChain. Backend Development: Design, develop, and optimize scalable FastAPI services for AI-driven applications. API Development & Optimization: Build and maintain high-performance APIs to support AI-based functionalities. Data Processing & Pipelines: Work with large-scale datasets for training and fine-tuning LLMs. Performance Optimization: Improve system efficiency, response times, and model inference speeds. Collaboration with AI & Product Teams: Work with data scientists and engineers to deploy AI solutions effectively. Security & Compliance: Implement best practices for secure API design, data privacy, and compliance. Testing & Code Quality: Ensure high-quality, maintainable, and well-documented code following CI/CD best practices. Job Types: Full-time, Permanent Pay: ₹11,258.64 - ₹58,033.36 per month Benefits: Cell phone reimbursement Health insurance Provident Fund Ability to commute/relocate: Palayam, Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Current monthly salary? Expected monthly salary? How early can you join?
Experience: RESTful APIs: 3 years (Preferred) LLMs: 2 years (Preferred) Python: 3 years (Required) vector databases: 2 years (Preferred) FastAPI: 2 years (Required) LangChain: 2 years (Required) Work Location: In person
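The role above works with vector databases (FAISS, Pinecone, etc.). At their core these rank stored embeddings by similarity to a query; a dependency-free cosine-similarity sketch (toy 2-D vectors and invented doc ids, purely for illustration) shows the idea:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query, index, k=2):
    """index: {doc_id: embedding}; return doc ids ranked by similarity to query."""
    ranked = sorted(index, key=lambda doc: cosine(query, index[doc]), reverse=True)
    return ranked[:k]

index = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.7, 0.7]}
print(top_k([1.0, 0.1], index, k=2))  # ['doc_a', 'doc_c']
```

Real vector stores add approximate-nearest-neighbour indexing so the search does not scan every embedding.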

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Surat, Gujarat

On-site

As a Python Developer with 1-2 years of experience, you will be responsible for designing and developing core modules of the AI Agent SDK in Python. Your role will include integrating and optimizing Speech-to-Text (STT), Language Model (LLM), and Text-to-Speech (TTS) pipelines to ensure real-time performance. You will work with APIs from various providers like OpenAI, Anthropic, Deepgram, AssemblyAI, Whisper, ElevenLabs, and others. Your key responsibilities will involve implementing efficient data structures and algorithms for streaming, concurrency, and low-latency AI interactions. Collaboration with frontend/mobile SDK teams (JS, React Native, Android, iOS) will be essential to ensure smooth integrations. Additionally, you will be tasked with building and maintaining unit tests, CI/CD pipelines, and documentation for SDK releases. Optimizing memory usage, error handling, and network performance for production-ready deployments will be part of your daily tasks. You will also be required to conduct research and experiments with the latest AI models, open-source tools, and SDK best practices to stay updated in the field. To excel in this role, you should have at least 1 year of experience in Python development with a strong focus on core concepts such as Object-Oriented Programming (OOP), asynchronous programming, multithreading, and multiprocessing. Hands-on experience with LLM APIs like OpenAI, Anthropic, and Llama is necessary. Previous experience with STT engines such as Whisper and TTS engines like ElevenLabs and Azure Speech is preferred. A solid understanding of WebSockets, gRPC, REST APIs, and real-time streaming is required. Proficiency in data handling, serialization (JSON, Protobuf), and message queues is expected. Familiarity with AI frameworks/libraries like PyTorch, Hugging Face Transformers, and LangChain would be beneficial. Experience in SDK development, packaging, and distribution, including PyPI, wheels, and versioning, is essential. 
Comfort working in Linux/macOS development environments is necessary, along with a good understanding of testing using tools like pytest, code quality, and performance profiling. Experience with Docker, Kubernetes, and cloud deployments (AWS/GCP/Azure), and knowledge of WebRTC, audio codecs, or real-time communication protocols are considered nice-to-have skills.
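The SDK role above centers on streaming, concurrency, and JSON serialization. A minimal producer/consumer sketch with a bounded queue (the chunk contents and field names are invented; a real STT stream would arrive over a socket or WebSocket, not a list):

```python
import json
import queue
import threading

SENTINEL = None  # marks end of stream

def producer(q, chunks):
    """Simulate a streaming source (e.g. STT partial results), serialized as JSON."""
    for chunk in chunks:
        q.put(json.dumps({"text": chunk}))
    q.put(SENTINEL)

def consumer(q, out):
    """Drain the queue until the sentinel, deserializing each chunk."""
    while (item := q.get()) is not SENTINEL:
        out.append(json.loads(item)["text"])

q, out = queue.Queue(maxsize=8), []
t = threading.Thread(target=consumer, args=(q, out))
t.start()
producer(q, ["hel", "lo ", "world"])
t.join()
print("".join(out))  # hello world
```

The bounded `maxsize` gives backpressure for free: a fast producer blocks on `put` instead of buffering unboundedly while the consumer catches up.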

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Delhi

On-site

As a Python Developer, you will play a crucial role in the development and scaling of our algorithmic execution platform. Your primary responsibility will be to contribute to building and enhancing the platform using your expertise in Python programming. Key skills that will be beneficial for this role include proficiency in network programming with a focus on sockets, experience with multiprocessing to improve system performance, and familiarity with NumPy for efficient numerical computing tasks. Join our team and be part of a dynamic environment where you can showcase your Python development skills to drive the growth and success of our algorithmic execution platform.
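The posting above highlights socket programming for an execution platform. A common building block is length-prefixed framing, since TCP delivers a byte stream with no message boundaries; this sketch (invented `send_order`/`recv_order` names, toy JSON payload) uses `socket.socketpair` as an in-process stand-in for a live connection:

```python
import socket

def send_order(sock, payload: bytes):
    """Length-prefix the payload so the receiver knows where the message ends."""
    sock.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_exact(sock, n):
    """Keep calling recv until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_order(sock) -> bytes:
    size = int.from_bytes(recv_exact(sock, 4), "big")
    return recv_exact(sock, size)

a, b = socket.socketpair()  # in-process stand-in for a real TCP connection
send_order(a, b'{"side": "BUY", "qty": 100}')
print(recv_order(b))  # b'{"side": "BUY", "qty": 100}'
a.close(); b.close()
```

The `recv_exact` loop matters: `recv(n)` may return fewer than `n` bytes, so a single call is not enough to reassemble a frame.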

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Python Backend Developer (FastAPI) Company: Pradha Solutions Location: Bangalore (On-site) Job Type: Full-Time Experience: 3+ Years About Us: Pradha Solutions is a growing IT services and staff augmentation company committed to delivering high-quality, scalable technology solutions. As we expand our engineering team, we are looking for talented backend developers to work on impactful and performance-driven applications for our global clients. About the Role We are looking for a skilled Python Backend Developer with strong expertise in building scalable APIs using FastAPI or Flask. The ideal candidate has hands-on experience with asynchronous programming, multiprocessing, and integrating external APIs (including generative AI APIs). Key Responsibilities Design, develop, and maintain RESTful APIs using FastAPI / Flask. Implement asynchronous programming using asyncio, multithreading, and multiprocessing where required. Work with databases (SQL/NoSQL) using SQLAlchemy or similar ORMs. Integrate third-party APIs, including Generative AI APIs (OpenAI, Hugging Face, etc.). Optimize application performance, scalability, and reliability. Ensure proper unit testing, mocking, and CI/CD integration. Work with containerization tools (Docker, Kubernetes) for deployment. Collaborate with frontend, DevOps, and product teams to deliver end-to-end solutions. Required Skills Strong proficiency in Python (3.x). Experience with FastAPI or Flask for backend development. Solid understanding of asyncio, concurrency, and event-driven programming. Knowledge of multithreading vs multiprocessing and when to apply each. Strong database knowledge (PostgreSQL, MySQL, MongoDB). Hands-on experience with unit testing (pytest, unittest, mocking). Familiarity with Docker, Kubernetes, and CI/CD pipelines. Good problem-solving, debugging, and performance optimization skills.
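The requirement above to know multithreading vs multiprocessing and when to apply each comes down to the GIL: threads only help when work blocks on I/O, while CPU-bound work needs separate processes. A hedged sketch with toy stand-in workloads:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n):
    """Pure computation: the GIL serializes threads, so processes win here."""
    return sum(i * i for i in range(n))

def io_bound(delay):
    """A blocking wait releases the GIL, so cheap threads are the better fit."""
    time.sleep(delay)
    return delay

if __name__ == "__main__":
    with ProcessPoolExecutor() as pp:      # CPU-bound: one process per core
        print(list(pp.map(cpu_bound, [10, 100])))
    with ThreadPoolExecutor() as tp:       # I/O-bound: many light threads
        print(list(tp.map(io_bound, [0.01, 0.01])))
```

For async-first services (as in this FastAPI role), `asyncio` replaces the thread pool for I/O, and a process pool remains the escape hatch for CPU-heavy steps.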

Posted 1 week ago

Apply

3.0 years

3 - 4 Lacs

Mohali

On-site

Position: Developer (Trading Bots & AWS Deployment) Location: Mohali (On-site / Hybrid) 8th Floor, Rich Robust, Cogneesol, Sector 75, Sahibzada Ajit Singh Nagar, Punjab – 160055 Employment Type: Full-time Experience: 3+ Years About the Role We are looking for an experienced Python Developer with expertise in algorithmic trading systems and AWS cloud deployment. The role requires hands-on experience in building, optimizing, and managing low-latency trading bots in a production-grade environment. If you have a strong technical background and a passion for finance + technology, this is an exciting opportunity to shape scalable trading infrastructure. Key Responsibilities Develop, test, and optimize Python-based trading bots. Integrate with broker APIs (e.g., Angel One SmartAPI, Zerodha Kite, Interactive Brokers). Deploy and manage systems on AWS (EC2, Lambda, S3, CloudWatch, IAM, etc.). Optimize for low-latency execution in live market environments. Handle multithreading, multiprocessing, and concurrent execution in trading systems. Implement secure, fault-tolerant, and scalable architecture. Set up logging, monitoring, and failover mechanisms for reliable performance. Required Skills & Experience Strong proficiency in Python (focus on algo trading/financial systems). Hands-on experience with AWS cloud services. Experience in low-latency trading applications. Expertise in REST & WebSocket API integration. Understanding of Linux servers, networking, and monitoring tools. Solid knowledge of error handling, system recovery, and logging frameworks. Good to Have Prior experience with SmartAPI, Kite API, or Interactive Brokers API. Understanding of trading strategies (e.g., RSI, EMA, Supertrend). Familiarity with Docker / Kubernetes for deployment. Knowledge of databases (PostgreSQL, MongoDB, Redis) for trade/state management. Job Types: Full-time, Permanent, Contractual / Temporary Pay: ₹25,000.00 - ₹40,000.00 per month Work Location: In person
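For the logging, failover, and error-handling duties above, a retry-with-exponential-backoff wrapper is a typical first layer. Everything here is illustrative: `flaky_place_order` simulates a broker API call (such as the SmartAPI/Kite integrations mentioned) rather than calling one:

```python
import logging
import time

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("bot")

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying on exception with exponential backoff; re-raise when exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

def flaky_place_order():
    """Stand-in for a broker API call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("exchange timeout")
    return {"order_id": "OID-1", "status": "placed"}

print(with_retries(flaky_place_order))  # {'order_id': 'OID-1', 'status': 'placed'}
```

In a live trading system the retry policy also needs idempotency checks (was the order actually accepted before the timeout?) before resubmitting.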

Posted 1 week ago

Apply

3.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

Position: Developer (Trading Bots & AWS Deployment) Location: Mohali (On-site / Hybrid) 8th Floor, Rich Robust, Cogneesol, Sector 75, Sahibzada Ajit Singh Nagar, Punjab – 160055 Employment Type: Full-time Experience: 3+ Years About the Role We are looking for an experienced Python Developer with expertise in algorithmic trading systems and AWS cloud deployment. The role requires hands-on experience in building, optimizing, and managing low-latency trading bots in a production-grade environment. If you have a strong technical background and a passion for finance + technology, this is an exciting opportunity to shape scalable trading infrastructure. Key Responsibilities Develop, test, and optimize Python-based trading bots. Integrate with broker APIs (e.g., Angel One SmartAPI, Zerodha Kite, Interactive Brokers). Deploy and manage systems on AWS (EC2, Lambda, S3, CloudWatch, IAM, etc.). Optimize for low-latency execution in live market environments. Handle multithreading, multiprocessing, and concurrent execution in trading systems. Implement secure, fault-tolerant, and scalable architecture. Set up logging, monitoring, and failover mechanisms for reliable performance. Required Skills & Experience Strong proficiency in Python (focus on algo trading/financial systems). Hands-on experience with AWS cloud services. Experience in low-latency trading applications. Expertise in REST & WebSocket API integration. Understanding of Linux servers, networking, and monitoring tools. Solid knowledge of error handling, system recovery, and logging frameworks. Good to Have Prior experience with SmartAPI, Kite API, or Interactive Brokers API. Understanding of trading strategies (e.g., RSI, EMA, Supertrend). Familiarity with Docker / Kubernetes for deployment. Knowledge of databases (PostgreSQL, MongoDB, Redis) for trade/state management. Job Types: Full-time, Permanent, Contractual / Temporary Pay: ₹25,000.00 - ₹40,000.00 per month Work Location: In person

Posted 1 week ago

Apply

3.0 - 7.0 years

8 - 12 Lacs

Chennai

Work from Office

Position Description: Hands-on experience in core Java, Spring, AWS services, and microservices development using Angular, REST, and so on. Exposure to and involvement in the product development life cycle would be an added benefit, along with standard methodologies, a detailed understanding of the technology roadmap, advancement of the design/development process, and providing production support on a rotation basis. Having experience in Genesys is a plus. Drive technical discussions, arbitrate, and recommend the optimal path forward in a room of highly opinionated engineers who may or may not agree with you. Use your experience and knowledge to influence better software design, and promote proper software engineering and bug-prevention strategies, testability, and security. Actively participate in the development process through writing and maintaining application features and automated tests, including unit tests, component tests, integration tests, and functional tests. Support the team in maintaining CI/CD pipelines. Collaborate with team members on improving the team's test coverage, release velocity, and production health. Participate in application code and test code reviews with the rest of the Scrum team. Own entire features from concept to deployment, working on cross-functional activities. Contribute ideas to improve our products, develop your skills, learn new technologies and languages, and continue to learn. The Expertise and Skills You Bring o You have excellent proficiency in engineering large, complex systems o You have proficiency in multiprocessing and parallel computing o You have experience and expertise in profiling and performance tuning software o You have proficiency in handling both structured and unstructured data o You have the ability to drive mature delivery practices through automation o You have strong proficiency in system programming with Java o You have proficiency implementing low-latency programs o You have exposure to memory modelling and JVM performance tuning o You have expertise with streaming data handling through topics, WebSockets, and queues o You have the drive and ability to deliver software with a high degree of automation o You are proficient with version control systems and can handle development for multiple releases in parallel o You have the spirit and willingness to contribute to org-level innovation o You have a learning mindset and are able to demonstrate versatility in addition to your specialization o You have strong proficiency in driving execution of high-quality designs and implementations o You are able to influence and drive adoption of the best tools for accelerated delivery o You work effectively with both partners and project team members o You know Agile methodologies or iterative development processes o You know acceptance test-driven development (a plus) o You have the ability to take ownership o You coach team members and take accountability for the deliverables o You have excellent collaboration and interpersonal skills o You have a great attitude, being a mentor, team player, and effective contributor o You focus on productivity o Experience in financial markets o Ability to quickly learn and adapt across the tech stack o Expertise working with public cloud environments Skills: Financial Services Mainframe
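The profiling and performance-tuning expertise in the list above is language-agnostic in principle; as an illustration (sketched in Python for brevity rather than the posting's Java), a standard profiler can attribute time to a deliberately hot function:

```python
import cProfile
import io
import pstats

def hot_path(n):
    """Deliberately quadratic work so it dominates the profile."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_path(200)
profiler.disable()

# Render the stats to a string and confirm the hot function shows up.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("hot_path" in report)  # True
```

The Java-side analogue would be a sampling profiler such as async-profiler or JFR; the workflow (measure, find the dominant frame, fix, re-measure) is the same.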

Posted 1 week ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Profile Description We’re seeking someone to join our team as an AI Platform Engineering Specialist who has strong hands-on experience building software platforms on any combination of the following: Kubernetes, cloud (AWS, Azure, and/or Google), API-based development, REST frameworks, data engineering, and large-scale API Gateway environments. Knowledge of AI/ML and hands-on experience implementing solutions using Generative AI are also preferable. The candidate will have great communication skills, a team-based mentality, and a strong passion for using AI to increase productivity as well as help generate new ideas for product & technical improvements. Enterprise Technology Enterprise Technology & Services (ETS) delivers shared technology services for Morgan Stanley, supporting all business applications and end users. ETS provides capabilities for all stages of Morgan Stanley’s software development lifecycle, enabling productive coding, functional and integration testing, application releases, and ongoing monitoring and support for over 3,000 production applications. ETS also delivers all workplace technologies (desktop, mobile, voice, video, productivity, intranet/internet) in integrated configurations that boost the personal productivity of employees. Application and end user functions are delivered on a scalable, secure, and reliable infrastructure composed of seamlessly integrated datacenter, network, compute, cloud, storage, and database functions. Architecture & Modernization Drives development of the global firm strategy to define modern architectures and guardrails to reduce legacy debt, while partnering with app dev to accelerate the adoption of modern capabilities. Software Engineering This is a position that develops and maintains software solutions that support business needs.
Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there’s ample opportunity to move across the businesses. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on… What You’ll Do In The Role Develop tooling and self-service capabilities for deploying AI solutions for the firm leveraging Kubernetes/OpenShift, Python, authentication solutions, APIs, REST frameworks, etc. Develop Terraform modules and cloud architecture to enable secure AI cloud service deployment and consumption at scale. Have a platform mindset and build common, reusable solutions to scale Generative AI use cases using pre-trained models as well as fine-tuned models. Leverage Kubernetes/OpenShift to develop modern containerized workloads. Integrate with capabilities such as large-scale vector stores for embeddings. Author best practices on the Generative AI ecosystem, when to use which tools, available models such as GPT, Llama, Hugging Face etc., and libraries such as LangChain. Analyze, investigate, and implement GenAI solutions focusing on Agentic Orchestration and Agent Builder frameworks. 
Author and publish architecture decision records to capture major design decisions and product selection for building Generative AI solutions, inclusive of app authentication, service communication, state externalization, container layering strategy and immutability. Ensure AI platforms are reliable, scalable, and operational (e.g. blueprints for upgrade/release strategies such as Blue/Green; logging/monitoring/metrics; automation of system management tasks). Participate in all of the team’s Agile/Scrum ceremonies. Participate in the team’s on-call rotation in a build/run team model. What You’ll Bring To The Role At least 4 years’ relevant experience would generally be expected to find the skills required for this role. Bachelor’s or Master’s degree in Computer Science or related field, or equivalent job experience. 4 years of experience in software engineering, design and development. Strong hands-on application development background in at least one prominent programming language, preferably Python (Flask or FastAPI). Broad understanding of data engineering (SQL, NoSQL, Big Data, Kafka, Redis), data governance, data privacy and security. Experience in development, management, and deployment of Kubernetes workloads, preferably on OpenShift. Experience with designing, developing, and managing RESTful services for large-scale enterprise solutions. Experience deploying applications on Azure, AWS, and/or GCP using IaC (Terraform). Hands-on experience with multiprocessing, multithreading, asynchronous I/O, and performance profiling in at least one prominent programming language, preferably Python. Ability to articulate technical concepts effectively to diverse audiences. Excellent communication skills. Demonstrated ability to work effectively and collaboratively in a global organization, across time zones, and across organizations. Demonstrated experience in DevOps, understanding of CI/CD (Jenkins) and GitOps. Knowledge of DevOps and Agile practices. 
Nice to have Practitioner of unit testing, performance testing and BDD/acceptance testing. Understanding of the OAuth 2.0 protocol for secure authorization. Proficiency with OpenTelemetry tools including Grafana, Loki, Prometheus, and Cortex. Good knowledge of microservice-based architecture and industry standards, for both public and private cloud. Good understanding of modern application configuration techniques. Hands-on experience with cloud application deployment patterns like Blue/Green. Good understanding of state sharing between scalable cloud components (Kafka, dynamic distributed caching). Good knowledge of various DB engines (SQL, Redis, Kafka, etc.) for cloud app storage. Experience building AI applications, preferably Generative AI and LLM-based apps. Deep understanding of AI agents, Agentic Orchestration, and Multi-Agent Workflow Automation, along with hands-on experience in Agent Builder frameworks such as LangChain and LangGraph. Experience working with Generative AI development, embeddings, and fine-tuning of Generative AI models. Understanding of ModelOps/MLOps/LLMOps. Understanding of SRE techniques. What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. 
We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You have a job opportunity in Bangalore, KA, IN where you will require a minimum of 5 years of relevant experience in Python. Your role will involve strong experience in core Python programming, along with expertise in Python libraries and modules such as Flask, Pandas, multiprocessing, and multithreading. You should have hands-on experience in developing REST APIs and a strong understanding of SQL, including Snowflake DB for DDL/DML operations, CTEs, stored procedures, and storage integration. Additionally, experience with any NoSQL DB and AWS cloud services like Lambda and S3 is expected. You should be familiar with unit test frameworks like pytest and have good knowledge of version control tools like Git. In terms of Angular, you are expected to have sound knowledge of HTML and CSS, along with familiarity with UI layouts, SASS, Bootstrap, and CSS Grid/Flexbox. Proficiency in JavaScript and TypeScript is required, as well as knowledge of Redux and ES6. You should have an in-depth understanding of the Angular framework and its design patterns, and experience in building performant Angular applications. A passion for creating good design and usability, along with strong communication skills as a team player, are essential. Experience in debugging using tools like the Chrome Developer Console, unit testing with tools like Jest, knowledge of Azure DevOps, and a solid understanding of version control principles using Git are also desired. Hands-on experience with web services/RESTful APIs and knowledge or working experience in Agile methodologies like Scrum will be beneficial for this role.

Posted 1 week ago

Apply

2.0 - 8.0 years

0 Lacs

karnataka

On-site

The Team Lead, Software Engineer works within the software development team at Verint, collaborating with members of the development organization, QE, and Tier 3 Support. Your primary responsibility will involve designing, developing, and implementing server-side software systems. You will work closely with management on departmental issues, exercising independent judgment and decision-making within established guidelines. As the Team Lead, Software Engineer, your key duties and responsibilities will include supervising teams within the software development function. You will be accountable for specific product areas within Verint's Enterprise product suite, developing and executing software development plans, processes, and procedures, and ensuring their adherence. Additionally, you will lead team activities such as requirements gathering & analysis, design/architecture/implementation/testing, and related reviews. Collaboration with other departments to prioritize software development needs, including designing, developing, documenting, and testing new and existing software, will be part of your role. You may also serve in a scrum master role as part of the agile software development process, ensuring the team meets agreed timelines, milestones, reliability & performance, and quality measures. It will also be your responsibility to evaluate results with stakeholders to assess if organizational objectives are being achieved. Analyzing and resolving software development issues and needs throughout the software's full life cycle, performing root cause analysis, and acting as a point of contact and escalation for the team/function will be crucial aspects of your role. The minimum requirements for this position include a BS in Computer Science, Software Engineering, or a related field, along with 8+ years of software development experience and at least 2 years of Team Lead experience. 
You should have strong proficiency in Java server-side programming, experience in designing and building fault-tolerant, highly available, distributed systems, and familiarity with standard concepts, practices, and procedures within software design and development. Additionally, you should possess experience in object-oriented analysis and design, strong troubleshooting & debugging capabilities in an agile software development team environment, and excellent organizational skills to manage multiple priorities and parallel projects effectively. Preferred qualifications for this role include an advanced degree, experience with CTI (Computer Telephony Integration) and telephony systems, familiarity with private/public cloud platforms such as AWS, Azure, or GCP, the ability to prioritize and delegate tasks across team boundaries and/or geographically dispersed teams, excellent organization, time management, and project leadership skills, as well as outstanding written and verbal communication abilities. You should also be able to adhere to strict deliverable deadlines while effectively multitasking.

Posted 1 week ago

Apply

3.0 years

0 Lacs

pune, maharashtra, india

On-site

About Position: We are hiring a Lead Python Engineer with hands-on experience with Apache Beam, Databricks, etc. Role: Lead Python Platform Engineer – Architecture & Performance Location: All PSL Locations Experience: 3 to 7 Years Job Type: Full Time Employment What You'll Do: Architecture and reuse: Design and build a shared component library/SDK for pipelines: ingestion, parsing/OCR, extraction (RegEx now, LLM/SLM later), validation, enrichment, publishing. Define patterns/templates for Apache Beam pipelines and Databricks jobs; standardize configuration, packaging, versioning, CI/CD, and documentation. Create pluggable interfaces so multiple teams can swap extractors (RegEx/LLM), OCR providers, and EMR publishers without code rewrites. Define repo strategy: shared/child repos for each use case. Performance and reliability: Own end-to-end profiling and tuning: cProfile/py-spy/line_profiler, memory (tracemalloc), CPU vs I/O analysis. Instrument services with Elastic APM and correlate traces/metrics with Splunk logs; build dashboards and runbooks. Implement concurrency best practices: asyncio for I/O-bound work, ThreadPool/ProcessPool for CPU-bound work, batching, rate limiting, retries, etc. Implement robust LLM API rate limiting/governance: enforce provider TPM and concurrency caps, request queueing/token budgeting, and emit APM/Splunk metrics (throttle rate, queue depth, cost per job) with alerts. Establish SLOs/alerts for throughput, latency, and error rates; set up DLQs and recovery patterns. Team enablement: Mentor devs, lead design reviews, codify best practices, write clear docs and examples. Partner with ML engineers on the future LLM/SLM path (evaluation harness, safety/PII, cost/perf). Expertise You'll Bring: 7+ years of Python with strong depth in performance and concurrency (asyncio, concurrent.futures, multiprocessing), profiling and memory tuning. 
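The concurrency and rate-limiting responsibilities above can be sketched as follows. This is a minimal illustration under stated assumptions, not the team's actual implementation: `call_llm` is a placeholder that sleeps instead of calling a real provider, and the concurrency cap is an assumed value.

```python
import asyncio

MAX_CONCURRENT = 3  # assumed provider concurrency cap

async def call_llm(prompt: str) -> str:
    """Placeholder for a real provider call; sleeps instead of doing HTTP."""
    await asyncio.sleep(0.01)
    return f"echo:{prompt}"

async def call_with_retry(sem: asyncio.Semaphore, prompt: str,
                          attempts: int = 3) -> str:
    """Cap concurrency with the semaphore; back off exponentially on failure."""
    for attempt in range(attempts):
        async with sem:
            try:
                return await call_llm(prompt)
            except Exception:
                await asyncio.sleep(2 ** attempt)  # exponential backoff
    raise RuntimeError("all retries exhausted")

async def main(prompts: list) -> list:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    return await asyncio.gather(*(call_with_retry(sem, p) for p in prompts))

replies = asyncio.run(main(["a", "b", "c", "d"]))
print(replies)  # ['echo:a', 'echo:b', 'echo:c', 'echo:d']
```

A production version would also track tokens per minute (TPM) against the provider budget and emit throttle/queue-depth metrics, as the listing describes.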
Observability expertise: Elastic APM instrumentation and dashboarding; Splunk for logs and correlation; OpenTelemetry familiarity. Must have implemented LLM-based solutions and supported them in production. API engineering for high-throughput integrations (REST, OAuth2), resilience patterns, and secure handling of sensitive data. Strong architecture/design skills: clean interfaces, packaging shared libs, versioning, CI/CD (GitHub Actions/Azure DevOps), testing. 3+ years building large-scale data pipelines with Apache Beam and/or Spark, including hands-on Databricks experience (Jobs, Delta Lake, cluster tuning). Document processing: OCR (Tesseract, AWS Textract, Azure Form Recognizer), PDF parsing, text normalization. LLM/SLM integration experience (e.g., OpenAI/Azure AI, local SLMs), prompt/eval frameworks, PII redaction/guardrails. Cloud and tooling: AWS/Azure/GCP, Dataflow/Flink, Terraform, Docker; cost/performance tuning on Databricks. Security/compliance mindset (HIPAA), secrets management, least-privilege access. Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents. Values-Driven, People-Centric & Inclusive Work Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We support hybrid work and flexible hours to fit diverse lifestyles. 
Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. Let’s unleash your full potential at Persistent - persistent.com/careers. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”

Posted 1 week ago

Apply

0 years

0 Lacs

serilingampalli, telangana, india

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you'll be doing… As Sr. Engineer Consultant - AI Science, you will play a leading role in building, deploying and managing end-to-end AI services to power traditional and generative AI use cases. You'll Need To Have Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Four or more years as a data scientist with exposure to full-stack model development, deployment, evaluation, optimization and scaling. Strong programming skills: proficiency in Python, PySpark, Java, C++ and R, plus relevant AI libraries/frameworks. Understanding of SOTA algorithms, especially in personalization, cognitive and generative models. Must have a good understanding of, and the ability to explain, both the code and the underlying math used in algorithms/models. Familiarity with multimodal data, vector and graph databases, and data warehousing fundamentals. Experience with cloud platforms like GCP and AWS and their respective AI services. Knowledge of GPU/CPU architecture and distributed computing. Understanding of containerization (Docker), orchestration (Kubernetes) and CI/CD pipelines. Exposure to large-scale AI training and understanding of compute system concepts (latency/throughput bottlenecks, pipelining, multiprocessing, etc.) 
and related performance analysis and tuning. Ability to synthesize and analyze data to answer business questions, and to design, deploy and monitor models with respect to technical and functional metrics and report to stakeholders accordingly. AI evangelist with research interests as well as a strong history of delivering AI solutions that address business priorities. Ability to communicate complex model designs and outcomes in business terms to a non-technical audience. Even better if you have one or more of the following: Advanced degree in Computer Science, Mathematics, Data Science or a similar field. Experience in developing and deploying real-time AI models. Prior experience with Generative AI techniques applied to Large Language Models and multimodal learning (image, video, speech, etc.). Repository of innovative AI research and applications on GitHub, scientific publications and patents. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #AI&D Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 week ago

Apply

0 years

0 Lacs

chennai, tamil nadu, india

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you'll be doing… As Sr. Engineer Consultant - AI Science, you will play a leading role in building, deploying and managing end-to-end AI services to power traditional and generative AI use cases. You'll Need To Have Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Four or more years as a data scientist with exposure to full-stack model development, deployment, evaluation, optimization and scaling. Strong programming skills: proficiency in Python, PySpark, Java, C++ and R, plus relevant AI libraries/frameworks. Understanding of SOTA algorithms, especially in personalization, cognitive and generative models. Must have a good understanding of, and the ability to explain, both the code and the underlying math used in algorithms/models. Familiarity with multimodal data, vector and graph databases, and data warehousing fundamentals. Experience with cloud platforms like GCP and AWS and their respective AI services. Knowledge of GPU/CPU architecture and distributed computing. Understanding of containerization (Docker), orchestration (Kubernetes) and CI/CD pipelines. Exposure to large-scale AI training and understanding of compute system concepts (latency/throughput bottlenecks, pipelining, multiprocessing, etc.) 
and related performance analysis and tuning. Ability to synthesize and analyze data to answer business questions, and to design, deploy and monitor models with respect to technical and functional metrics and report to stakeholders accordingly. AI evangelist with research interests as well as a strong history of delivering AI solutions that address business priorities. Ability to communicate complex model designs and outcomes in business terms to a non-technical audience. Even better if you have one or more of the following: Advanced degree in Computer Science, Mathematics, Data Science or a similar field. Experience in developing and deploying real-time AI models. Prior experience with Generative AI techniques applied to Large Language Models and multimodal learning (image, video, speech, etc.). Repository of innovative AI research and applications on GitHub, scientific publications and patents. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #AI&D Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 week ago

Apply

7.0 years

0 Lacs

pune, maharashtra, india

On-site

About Position: We are looking for an experienced and talented Python Architect to join our growing data competency team. The ideal candidate will have a strong background in Python platform engineering, the Apache Beam + Databricks platform, parsing/OCR, validation, implementation, and cloud (AWS/Azure/GCP). We have built the core features, but need a senior engineer to own the architecture. Role: Python Architect Location: All Persistent Locations Experience: 7 to 14 Years Job Type: Full Time Employment What You'll Do: Architecture and reuse: Design and build a shared component library/SDK for pipelines: ingestion, parsing/OCR, extraction (RegEx now, LLM/SLM later), validation, enrichment, publishing. Define patterns/templates for Apache Beam pipelines and Databricks jobs; standardize configuration, packaging, versioning, CI/CD, and documentation. Create pluggable interfaces so multiple teams can swap extractors (RegEx/LLM), OCR providers, and EMR publishers without code rewrites. Define repo strategy: shared/child repos for each use case. Performance and reliability: Own end-to-end profiling and tuning: cProfile/py-spy/line_profiler, memory (tracemalloc), CPU vs I/O analysis. Instrument services with Elastic APM and correlate traces/metrics with Splunk logs; build dashboards and runbooks. Implement concurrency best practices: asyncio for I/O-bound work, ThreadPool/ProcessPool for CPU-bound work, batching, rate limiting, retries, etc. Implement robust LLM API rate limiting/governance: enforce provider TPM and concurrency caps, request queueing/token budgeting, and emit APM/Splunk metrics (throttle rate, queue depth, cost per job) with alerts. Establish SLOs/alerts for throughput, latency, and error rates; set up DLQs and recovery patterns. Team Enablement: Mentor devs, lead design reviews, codify best practices, write clear docs and examples. Partner with ML engineers on the future LLM/SLM path (evaluation harness, safety/PII, cost/perf). 
Expertise You'll Bring: 7+ years of Python with strong depth in performance and concurrency (asyncio, concurrent.futures, multiprocessing), profiling and memory tuning. Observability expertise: Elastic APM instrumentation and dashboarding; Splunk for logs and correlation; OpenTelemetry familiarity. Must have implemented LLM-based solutions and supported them in production. API engineering for high-throughput integrations (REST, OAuth2), resilience patterns, and secure handling of sensitive data. Strong architecture/design skills: clean interfaces, packaging shared libs, versioning, CI/CD (GitHub Actions/Azure DevOps), testing. 3+ years building large-scale data pipelines with Apache Beam and/or Spark, including hands-on Databricks experience (Jobs, Delta Lake, cluster tuning). Document processing: OCR (Tesseract, AWS Textract, Azure Form Recognizer), PDF parsing, text normalization. LLM/SLM integration experience (e.g., OpenAI/Azure AI, local SLMs), prompt/eval frameworks, PII redaction/guardrails. Cloud and tooling: AWS/Azure/GCP, Dataflow/Flink, Terraform, Docker; cost/performance tuning on Databricks. Security/compliance mindset (HIPAA), secrets management, least-privilege access. Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents. Values-Driven, People-Centric & Inclusive Work Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. 
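The profiling expertise this role asks for can be illustrated with `cProfile` from the standard library; the `parse_documents` workload below is an invented stand-in for a real parsing stage:

```python
import cProfile
import io
import pstats

def parse_documents(n: int) -> list:
    """Toy workload standing in for a parsing/extraction stage."""
    return [str(i).zfill(8) for i in range(n)]

# Profile just the hot section rather than the whole program.
profiler = cProfile.Profile()
profiler.enable()
docs = parse_documents(50_000)
profiler.disable()

# Render the stats sorted by cumulative time, keeping the top entries.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # top 5 entries by cumulative time
report = stream.getvalue()
print(report.splitlines()[0])
```

For sampling without code changes, py-spy attaches to a running process, and `tracemalloc` covers the memory side; the pattern of profiling a narrow hot section then reading sorted stats is the same.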
We welcome diverse candidates from all backgrounds. We support hybrid work and flexible hours to fit diverse lifestyles. Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. Let’s unleash your full potential at Persistent - persistent.com/careers. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”

Posted 1 week ago

Apply

2.0 years

0 Lacs

ahmedabad, gujarat, india

On-site

Job Title: Senior Python Developer – Web Scraping & Automation Company: Actowiz Solutions Location: Ahmedabad Job Type: Full-time Working Days: 5 Days a Week About Us Actowiz Solutions is a leading provider of data extraction, web scraping, and automation solutions. We empower businesses with actionable insights by delivering clean, structured, and scalable data through cutting-edge technology. Join our fast-growing team and lead projects that shape the future of data intelligence. Role Overview We are seeking an experienced Senior Python Developer with proven expertise in Scrapy (must-have) and strong skills in web scraping and automation. The ideal candidate will design, develop, and optimize large-scale scraping solutions that power data-driven decision-making. Key Responsibilities • Design, develop, and maintain scalable web scraping frameworks using Scrapy (mandatory). • Work with additional libraries/tools such as BeautifulSoup, Selenium, Playwright, Requests, etc. • Implement robust error handling, data parsing, and data storage mechanisms (JSON, CSV, SQL/NoSQL databases). • Build and optimize asynchronous scraping workflows and handle multithreading/multiprocessing. • Collaborate with product managers, QA, and DevOps teams to ensure timely delivery. • Research and adopt new scraping technologies to improve performance, scalability, and efficiency. Requirements • 2+ years of experience in Python development with Scrapy expertise (must-have). • Proficiency with automation libraries such as Playwright or Selenium. • Experience with REST APIs, asynchronous programming, and concurrency. • Familiarity with databases (SQL/NoSQL) and cloud-based data pipelines. • Strong problem-solving skills and ability to meet deadlines in an Agile environment. Preferred Qualifications • Knowledge of DevOps tools such as Docker, GitHub Actions, or CI/CD pipelines. Benefits • Competitive salary. • 5-day work week (Monday–Friday). • Flexible and collaborative work environment. 
• Ample opportunities for career growth and skill development.
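The asynchronous scraping workflow described above can be sketched with only the standard library; the `fetch` coroutine below is a placeholder for a real HTTP client (aiohttp or httpx in production, or Scrapy's own scheduler), and the pages are invented:

```python
import asyncio
from html.parser import HTMLParser

# Simulated pages; in a real spider these would come from HTTP responses.
PAGES = {
    "/p1": "<html><title>Widget A</title></html>",
    "/p2": "<html><title>Widget B</title></html>",
}

class TitleParser(HTMLParser):
    """Collect the text inside <title>."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

async def fetch(url: str) -> str:
    """Placeholder for an HTTP GET; sleeps instead of hitting the network."""
    await asyncio.sleep(0.01)
    return PAGES[url]

async def scrape(url: str) -> str:
    html = await fetch(url)
    parser = TitleParser()
    parser.feed(html)
    return parser.title

async def main() -> list:
    # Pages are fetched concurrently, not one after another.
    return await asyncio.gather(*(scrape(u) for u in sorted(PAGES)))

titles = asyncio.run(main())
print(titles)  # ['Widget A', 'Widget B']
```

Scrapy handles this scheduling (plus retries, throttling, and pipelines) internally; the sketch just shows the concurrent fetch-then-parse shape the listing refers to.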

Posted 1 week ago

Apply

4.0 years

0 Lacs

pune, maharashtra, india

On-site

Company Description Unipart Group is a leading provider of manufacturing, logistics and consultancy services. The Group provides a wide range of business support services to a variety of sectors, such as automotive, healthcare and telecommunications. Key Responsibilities ● Engage with business stakeholders to gather and understand business requirements ● Develop backend and frontend components for analytics and AI applications ● Develop data pipelines that extract, transform and load data from source systems into a data lake/data warehouse ● Ensure code quality through code reviews, unit testing, integration testing, and continuous integration/continuous deployment (CI/CD) practices ● Participate in the architecture and design of data and software solutions that support analytics and AI initiatives ● Document technical solutions for cross-functional teams and long-term maintainability ● Troubleshoot and resolve production issues across data and application layers ● Present results to business stakeholders involved in data, analytics, and AI projects Role Description Skills and Qualifications ● Essential ○ Excellent knowledge of Python and object-oriented programming ○ Demonstrable ability to build fullstack applications using JavaScript (React and Next.js are a plus) ○ Demonstrable experience building ETL pipelines ○ Self-starter, pragmatic, able to execute with minimal supervision ● Desirable ○ Experience building frontend applications using Web Components ○ Basic understanding of ML/AI and willingness to learn ○ Basic understanding of simulation and discrete optimisation techniques is a plus ○ AI/ML certification ○ Bachelor's or Master's in Data Science or AI Experience ● Essential ○ Previous experience working in client-facing roles ○ Proven ability to deliver data science or numerical computing projects, ideally end-to-end including deployment and maintenance ○ 4+ years of experience working with numpy, pandas, multiprocessing and asynchronous programming ○ 4+ years of experience working with relational database management systems ○ Ability to understand how business needs steer the development of a product ○ 4+ years of experience working with Linux environments for development and deployment ○ 4+ years of experience working with Git and collaborative development workflows ● Desirable ○ Proven ability to work in cloud environments (AWS and GCP preferred) ○ Previous experience building CI/CD pipelines is a plus ○ Experience with managing multiple stakeholders in a matrix environment

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

hyderabad, telangana, india

Remote

We are a global team of innovators and pioneers dedicated to shaping the future of observability. At New Relic, we build an intelligent platform that empowers companies to thrive in an AI-first world by giving them unparalleled insight into their complex systems. As we continue to expand our global footprint, we're looking for passionate people to join our mission. If you're ready to help the world's best companies optimize their digital applications, we invite you to explore a career with us!

Your Opportunity
New Relic is a leader in the observability industry and has been at the forefront of developing cutting-edge AI/ML solutions. We are seeking an experienced and dynamic Backend Engineer (Python) to join our AI/ML team. You will develop scalable web services and APIs using Python and its extended ecosystem for our Agentic AI Platform, which will be the nucleus of AI workflows at New Relic. Your responsibilities will include ideating, implementing and owning the low-level design of the service, monitoring the service in production, and innovating and optimizing its functioning over time. Any experience with ML techniques can come in handy for the role but is not a prerequisite. These are exciting times for New Relic to make a significant impact on AI-led observability, and even more exciting for engineers on the AI team to contribute to that journey.

What will you do
Engineer well-designed, scalable, and resilient microservices in modern technologies. Deliver high-quality, performant software with an emphasis on scalability and reliability. Build thoughtful, high-quality code that is easy to read and maintain. Collaborate with your team, external contributors, and others to help solve problems. Write and share proposals to improve team processes and approaches.

This role requires 5+ years of experience as a Python Backend Engineer developing production-grade applications. Proficiency in back-end frameworks such as Django, Flask, or FastAPI. Expertise in Pydantic for data validation, type checking, and constructing robust models that ensure data integrity. Strong knowledge of Python's asyncio library and hands-on experience with asynchronous request handling. Familiarity with async libraries such as aiohttp or httpx. Competency in using Python's threading and multiprocessing modules for parallel task execution. Knowledge of coroutines. Understanding of the Global Interpreter Lock (GIL) and its implications for concurrency. Proficiency in creating and consuming decorators for code reuse and abstraction. Skill in designing and utilizing iterators and generators to manage data streams efficiently. Experience with testing frameworks like PyTest or unittest to ensure code quality and reliability. Strong debugging skills in distributed systems. Proficiency in using Git for version control and experience with CI/CD pipelines using tools like Jenkins or GitLab CI. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Strong knowledge of fundamental data structures such as lists, sets, dictionaries, and trees. Ability to implement and optimize algorithms for problem-solving and performance tuning.

Bonus points
Master's in a Computer Science discipline. Any exposure to Machine Learning and GenAI technologies. Familiarity with message broker systems (e.g., Kafka, RabbitMQ). Familiarity with Postgres or similar RDBMS. Experience with ML workflow management tools like Airflow, SageMaker, etc. Experience with ORM libraries like SQLAlchemy and data serialization libraries like Marshmallow.

Please note that visa sponsorship is not available for this position.

Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day.
We celebrate our talented Relics' different backgrounds and abilities, and recognize the different paths they took to reach us, including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We're looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to [HIDDEN TEXT]. We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid. Our hiring process In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. New Relic develops and distributes encryption software and technology that complies with U.S. export controls and licensing requirements. Certain New Relic roles require candidates to pass an export compliance assessment as a condition of employment in any global location. If relevant, we will provide more information later in the application process.
Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
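The asyncio skills this listing calls for (coroutines, concurrent request handling) can be sketched briefly. The service names and delays below are invented for illustration; `asyncio.sleep` stands in for real network I/O against an upstream API.

```python
import asyncio

# Simulate an I/O-bound call (e.g. querying an upstream service);
# the sleep stands in for network latency.
async def fetch(service: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f"{service}: ok"

# Fan out several awaits concurrently: total wall time tracks the slowest
# call, not the sum, because the event loop interleaves the coroutines
# while each one is blocked on I/O.
async def gather_health() -> list:
    return await asyncio.gather(
        fetch("metrics", 0.01),
        fetch("traces", 0.02),
        fetch("logs", 0.01),
    )

results = asyncio.run(gather_health())
```

This is also where the GIL distinction the listing mentions matters: asyncio and threading help with I/O-bound work like the above, while CPU-bound work needs the multiprocessing module to sidestep the GIL.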

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

gurugram, haryana, india

On-site

We're looking for a Python developer to help us build and scale up our algorithmic execution platform. Some skills that'll help: network programming (sockets), multiprocessing, numpy.
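The socket programming this role mentions often comes down to framing messages over a stream. Below is a minimal length-prefixed framing sketch; the order payload is invented, and a `socketpair` stands in for the TCP connection two processes of an execution platform would actually share.

```python
import socket
import struct

# Length-prefixed framing: each message is a 4-byte big-endian length
# followed by the payload, so the receiver knows where one message ends.
def send_msg(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> bytes:
    header = sock.recv(4)
    (length,) = struct.unpack("!I", header)
    data = b""
    while len(data) < length:          # TCP is a stream: loop until the
        data += sock.recv(length - len(data))  # full frame has arrived
    return data

# socketpair gives two connected endpoints in one process for demonstration.
a, b = socket.socketpair()
send_msg(a, b"ORDER:BUY:100")
received = recv_msg(b)
a.close()
b.close()
```

The receive loop matters: `recv` may return fewer bytes than requested, which is the classic bug in naive socket code.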

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

This is a full-time, on-site opportunity in Thiruvananthapuram, Kerala, requiring a minimum of 6 years of experience in Python development with a specialization in AI-driven applications. Your role will involve utilizing your strong expertise in FastAPI for high-performance backend development. Additionally, you should have experience working with LLMs (GPT, Llama, Claude, etc.) and AI model deployment. Hands-on experience with LangChain for AI-driven workflows is essential, as well as familiarity with vector databases such as FAISS, Pinecone, Weaviate, ChromaDB, etc. Knowledge of RESTful APIs, GraphQL, and authentication mechanisms is required for this position. You should also be familiar with Hugging Face, OpenAI APIs, and fine-tuning LLMs. Experience in asynchronous programming, multiprocessing, and performance tuning will be beneficial. Strong problem-solving skills, debugging expertise, and experience in Agile/Scrum methodologies are also valuable assets. Your key responsibilities will include developing and integrating LLMs into the WayVida platform using LangChain; designing, developing, and optimizing scalable FastAPI services for AI-driven applications; and building and maintaining high-performance APIs to support AI-based functionalities. You will work with large-scale datasets for training and fine-tuning LLMs, and be responsible for improving system efficiency, response times, and model inference speeds. Collaborating with AI and product teams to deploy AI solutions effectively, implementing best practices for secure API design, data privacy, and compliance, and ensuring high-quality, maintainable, and well-documented code following CI/CD best practices are also part of your role.
Preferred qualifications for this position include experience in AI-driven EdTech platforms, knowledge of retrieval-augmented generation (RAG) and prompt engineering, familiarity with Docker, Kubernetes, and cloud deployment (Azure, AWS, GCP), as well as exposure to MLOps, model versioning, and continuous AI deployments.
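The retrieval step of the RAG pattern this listing names can be sketched without any third-party libraries. The bag-of-words "embedding" below is a deliberate toy stand-in for a real model-based embedder, and the documents are invented; only the ranking logic is the point.

```python
import math
from collections import Counter

# Toy bag-of-words "embedding"; a real RAG system would call a
# model-based embedder (e.g. via an embeddings API) instead.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Retrieval step of RAG: rank stored documents by similarity to the query;
# the top hits would then be injected into the LLM prompt as context.
def retrieve(query: str, docs: list, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "course videos and lecture transcripts",
    "billing and subscription policies",
    "quiz generation from lecture content",
]
top = retrieve("generate a quiz from the lecture", docs)
```

In production the linear scan over `docs` is what a vector database (FAISS, Pinecone, Weaviate, ChromaDB) replaces with an approximate nearest-neighbour index.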

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

hyderabad, chennai, bengaluru

Hybrid

Job Description: We are seeking a skilled and motivated Senior Software Engineer with 5+ years of professional experience to join our dynamic team. The ideal candidate should have a strong grasp of core Python programming, object-oriented principles, high-level design patterns, and debugging, plus hands-on experience with SQL/NoSQL databases. Working knowledge of cloud platforms (AWS, GCP, Azure, etc.) is an added advantage. Familiarity with AI/ML technologies is a plus. This role requires a commitment to writing clean, maintainable, and scalable code while embracing a continuous learning mindset. You'll collaborate with cross-functional teams to build robust, high-performance Python applications, with a strong emphasis on test-driven development (TDD) and code quality. Key Responsibilities: Design, develop, and maintain scalable, high-performance Python applications. Design systems with non-linear time complexity and efficient space usage across compute and storage. Ensure stateless, idempotent request processing with no in-memory state. Model schemas for future evolution, supporting increasing data volume and structural changes. Build and operate cloud-based SaaS applications with a focus on production reliability. Design includes not only functional code but also integrated monitoring, alerting, and health checks to ensure observability and operational excellence in a multi-tenant environment. Apply OOP principles to build efficient, reusable software components. Work closely with cross-functional teams to translate business requirements into technical solutions. Contribute across the software development lifecycle: requirements, design, implementation, testing, deployment, and support. Utilize SQL/NoSQL databases (especially MongoDB) for effective data modeling and access patterns. Write robust unit and integration tests following TDD best practices. Participate in code reviews and provide constructive feedback to ensure code quality and team improvement.
Engage in Agile ceremonies, including sprint planning, daily stand-ups, and retrospectives. Communicate effectively with stakeholders to report progress, resolve challenges, and discuss improvements. Stay updated with industry trends and new technologies, especially in Python and cloud development. Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 5+ years of hands-on Python development experience with strong knowledge of its ecosystem. Solid understanding and practical use of OOP principles. Proven experience working with SQL/NoSQL databases, preferably MongoDB. Working knowledge of cloud environments (AWS, GCP, or Azure). Familiarity with TDD and writing unit/integration tests. Strong collaboration and communication skills. Self-motivated, with a proactive approach to learning and solving problems. Excellent analytical and debugging capabilities. Preferred Qualifications: Experience with additional programming languages or frameworks. Web scraping experience using Python libraries (e.g., BeautifulSoup, Scrapy). Proficiency with Git and collaborative version control workflows. Understanding of Agile/Scrum methodologies. Exposure to AI/ML technologies, including LLMs, prompt engineering, or traditional ML frameworks.
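The "stateless, idempotent request processing" this listing calls for is a concrete pattern worth sketching. The dict-based store, key names, and payment example below are all invented stand-ins; a real service would keep the store in a database shared across instances, since no state may live in process memory.

```python
import uuid

# Idempotent handler: the outcome of a request is keyed by a
# client-supplied idempotency key, so a retried request replays the
# stored result instead of charging the customer twice.
def process_payment(store: dict, idempotency_key: str, amount: int) -> dict:
    if idempotency_key in store:
        return store[idempotency_key]      # retry: replay the prior result
    result = {
        "txn_id": str(uuid.uuid4()),
        "amount": amount,
        "status": "charged",
    }
    store[idempotency_key] = result        # persist before acknowledging
    return result

store = {}                                 # stand-in for a shared database
first = process_payment(store, "order-42", 100)
retry = process_payment(store, "order-42", 100)
```

Because the handler keeps no in-memory state of its own, any instance behind a load balancer can serve the retry, which is exactly what makes the design stateless.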

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies