Home
Jobs
Companies
Resume

2243 NumPy Jobs - Page 5

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates results for easy application access, but you apply directly on the job portal.

0 years

0 Lacs

India

Remote

Source: LinkedIn

Data Analyst Intern
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is an edtech startup focused on delivering industry-aligned, project-based virtual internships. Our Data Analyst Internship is designed to equip students and recent graduates with the analytical skills and practical tools needed to work with real-world data and support business decisions.

Internship Overview
As a Data Analyst Intern, you will work on live projects involving data collection, cleaning, analysis, and visualization. You will gain hands-on experience using tools like Excel, SQL, Python, and Power BI/Tableau to extract insights and create impactful reports.

Key Responsibilities
Gather, clean, and organize raw data from multiple sources
Perform exploratory data analysis (EDA) to uncover patterns and trends
Write efficient SQL queries to retrieve and manipulate data
Create interactive dashboards and visual reports using Power BI or Tableau
Use Python (Pandas, NumPy, Matplotlib) for data processing and analysis
Present findings and recommendations through reports and presentations
Collaborate with mentors and cross-functional teams on assigned projects

Qualifications
Pursuing or recently completed a degree in Data Science, Computer Science, IT, Statistics, Economics, or a related field
Basic knowledge of Excel, SQL, and Python
Understanding of data visualization and reporting concepts
Strong analytical and problem-solving skills
Detail-oriented, with good communication and documentation abilities
Eagerness to learn and apply analytical techniques to real business problems

What You'll Gain
Practical experience in data analysis, reporting, and business intelligence
Exposure to industry tools and real-life data scenarios
A portfolio of dashboards and reports to showcase in interviews
Internship Certificate upon successful completion
Letter of Recommendation for top performers
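For context on the Pandas/NumPy/Matplotlib responsibilities this listing names, here is a minimal EDA sketch; the file name and columns (sales_raw.csv, order_date, revenue, region) are hypothetical stand-ins, not part of the posting:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load and clean a raw dataset (hypothetical file and column names)
df = pd.read_csv("sales_raw.csv")
df = df.drop_duplicates()
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df = df.dropna(subset=["order_date", "revenue"])

# Quick exploratory summaries
print(df.describe())
print(df["region"].value_counts())

# Visualize a monthly revenue trend
monthly = df.set_index("order_date")["revenue"].resample("M").sum()
monthly.plot(kind="line", title="Monthly Revenue")
plt.tight_layout()
plt.show()
```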

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Data Science Intern – Remote | Explore the World of AI & Data

Are you fascinated by machine learning, data modeling, and real-world applications of AI? If you're ready to dive into the exciting world of data science, join Skillfied Mentor as a Data Science Intern and start building your future in tech.

Location: Remote / Virtual
Job Type: Internship (Unpaid)
Schedule: Flexible working hours

About the Internship:
As a Data Science Intern, you'll get hands-on exposure to real data problems, machine learning concepts, and practical projects. This internship is designed to give you experience that matters — even without prior industry background.
Work with real datasets to build and test models
Learn tools like Python, Pandas, NumPy, Scikit-Learn, and Jupyter Notebooks
Understand the basics of machine learning and data preprocessing
Collaborate with a remote team to solve business-related challenges
Apply statistics and coding to derive data-driven solutions

You're a Great Fit If You:
Have basic Python knowledge or are eager to learn
Are curious about AI, data modeling, and machine learning
Can dedicate 5–7 hours per week (flexibly)
Are a self-learner and motivated to grow in the data science field
Want to build a strong project portfolio with real use cases

What You'll Gain:
Certificate of Completion
Real Projects to Showcase Your Skills
Practical Knowledge of Data Science Workflows
Experience with Tools Used by Professionals

Last Date to Apply: 20th June 2025
Whether you're a student, fresher, or career switcher, this internship is your entry point into the dynamic world of Data Science. Apply now and bring your data science journey to life with Skillfied Mentor.
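To make the scikit-learn workflow mentioned above concrete, here is a minimal train-and-evaluate sketch; a built-in dataset stands in for the internship's real data:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A built-in dataset stands in for the "real datasets" the listing mentions
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```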

Posted 2 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Join us as a Data Scientist

In this role, you'll drive and embed the design and implementation of data science tools and methods, which harness our data to drive market-leading customer solutions. Day-to-day, you'll act as a subject matter expert and articulate advanced data and analytics opportunities, bringing them to life through data visualisation. If you're ready for a new challenge, and are interested in identifying opportunities to support external customers by using your data science expertise, this could be the role for you. We're offering this role at vice president level.

What you'll do
We're looking for someone to understand the requirements and needs of our business stakeholders. You'll develop good relationships with them, form hypotheses, and identify suitable data and analytics solutions to meet their needs and to achieve our business strategy. You'll be maintaining and developing external curiosity around new and emerging trends within data science, keeping up to date with emerging trends and tooling and sharing updates within and outside of the team.

You'll also be responsible for:
Proactively bringing together statistical, mathematical, machine-learning and software engineering skills to consider multiple solutions, techniques, and algorithms
Implementing ethically sound models end-to-end and applying software engineering and a product development lens to complex business problems
Working with and leading both direct reports and wider teams in an Agile way within multi-disciplinary data teams to achieve agreed project and Scrum outcomes
Using your data translation skills to work closely with business stakeholders to define business questions, problems or opportunities that can be supported through advanced analytics
Selecting, building, training, and testing complex machine learning models, considering model validation, model risk, governance, and ethics throughout to implement and scale models

The skills you'll need
To be successful in this role, you'll need evidence of project implementation and work experience gained in a data-analysis-related field as part of a multi-disciplinary team. We'll also expect you to hold an undergraduate or a master's degree in Data Science, Statistics, Computer Science, or a related field. You'll also need at least 10 years of experience with statistical software, database languages, big data technologies, cloud environments and machine learning on large data sets. And we'll look to you to bring the ability to demonstrate leadership, self-direction and a willingness to both teach others and learn new techniques.

Additionally, you'll need:
Experience of deploying machine learning models into a production environment
Proficiency in Python and relevant libraries such as Pandas, NumPy and Scikit-learn, coupled with experience in data visualisation tools
Extensive work experience with AWS SageMaker, including expertise in statistical data analysis, machine learning models, LLMs, and data management principles
Effective verbal and written communication skills, the ability to adapt communication style to a specific audience, and mentoring junior team members

Posted 2 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Source: LinkedIn

Job Description:
• Graduate degree in a quantitative field (CS, statistics, applied mathematics, machine learning, or related discipline)
• Good programming skills in Python with strong working knowledge of Python's numerical, data analysis, or AI frameworks such as NumPy, Pandas, Scikit-learn, etc.
• Experience with SQL, Excel, Tableau/Power BI, PowerPoint
• Predictive modelling experience in Python (Time Series/Multivariable/Causal)
• Experience applying various machine learning techniques and understanding the key parameters that affect their performance
• Experience of building systems that capture and utilize large data sets to quantify performance via metrics or KPIs
• Excellent verbal and written communication
• Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects

Roles & Responsibilities:
Lead a team of Data Engineers, Analysts and Data Scientists to carry out the following activities:
• Connect with internal/external POCs to understand the business requirements
• Coordinate with the right POCs to gather all relevant data artifacts, anecdotes, and hypotheses
• Create project plans and sprints for milestones/deliverables
• Spin up VMs, and create and optimize clusters for Data Science workflows
• Create data pipelines to ingest data effectively
• Assure the quality of data with proactive checks and resolve the gaps
• Carry out EDA and feature engineering, and define performance metrics before running relevant ML/DL algorithms
• Research whether similar solutions have already been developed before building ML models
• Create optimized data models to query relevant data efficiently
• Run relevant ML/DL algorithms to meet business goals
• Optimize and validate these ML/DL models to scale
• Create light applications, simulators, and scenario builders to help the business consume the end outputs
• Create test cases, test the code pre-production for possible bugs, and resolve these bugs proactively
• Integrate and operationalize the models in the client ecosystem
• Document project artifacts and log failures and exceptions
• Measure and articulate the impact of DS projects on business metrics, and fine-tune the workflow based on feedback
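One common shape of the time-series predictive modelling asked for above is lag-feature regression; a hedged sketch on synthetic data (all numbers illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic monthly series; lagged values turn forecasting into a regression problem
rng = np.random.default_rng(0)
y = pd.Series(100 + np.cumsum(rng.normal(0, 5, 48)),
              index=pd.date_range("2021-01-31", periods=48, freq="M"))

df = pd.DataFrame({"y": y})
for lag in (1, 2, 3):
    df[f"lag_{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-6], df.iloc[-6:]
model = LinearRegression().fit(train.drop(columns="y"), train["y"])
pred = model.predict(test.drop(columns="y"))
mape = np.mean(np.abs((test["y"] - pred) / test["y"])) * 100
print(f"MAPE on 6-month holdout: {mape:.1f}%")
```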

Posted 2 days ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Source: LinkedIn

Position: AI Engineer
Location: Trivandrum (Remote or Hybrid)
Type: Full-time
Start Date: Immediate
Company: Turilytix.ai

About the Role:
As an AI Engineer at Turilytix, you'll build real-time machine learning systems that power BIG-AI, our no-code platform used in paper, supply chain, and IT operations. Help deploy ML in the real world: no sandbox, no limits.

Responsibilities:
• Design and deploy time-series and anomaly detection models
• Build scalable pipelines for streaming and batch predictions
• Integrate ML models into live product environments
• Optimize models for on-prem or cloud deployments

Required Skills:
• Python (NumPy, Pandas, Scikit-learn)
• Time-series ML, classification, regression
• MLOps tools: MLflow, Docker, FastAPI
• OS: Linux
• Bonus: Edge AI, Git, ONNX, Kubernetes

What You Get:
• Real-world AI use cases
• Fast-paced learning and ownership
• Hybrid flexibility, global impact

Apply at: hr@turilytix.ai
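As a hedged illustration of the anomaly-detection work described above (synthetic readings, not Turilytix data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic sensor readings stand in for the streaming data the role describes
rng = np.random.default_rng(1)
normal = rng.normal(50, 2, size=(500, 1))
spikes = rng.normal(80, 5, size=(10, 1))
readings = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.02, random_state=1).fit(readings)
flags = detector.predict(readings)  # -1 marks anomalies
print("Anomalies flagged:", int((flags == -1).sum()))
```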

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

๐Ÿ“ Location: Remote ๐Ÿ’ผ Type: Internship (Unpaid) ๐Ÿ•’ Duration: Flexible (Learn at your own pace!) ๐Ÿ“… Application Deadline: 7th June 2025 ๐Ÿ“Š About TechNest Intern TechNest Intern , we are committed to helping learners move beyond theory by providing a practical, remote-first internship experience in the field of Data Science . This program is specially designed for students and freshers looking to apply their analytical skills on real datasets , develop ML models , and understand how data powers smart decisions in todayโ€™s world. ๐Ÿš€ ๐Ÿ’ผ Role: Data Science Intern Curious about the patterns behind the numbers? Want to work with data that drives real impact? This role is perfect for aspiring data scientists who want to get hands-on with data cleaning, analysis, machine learning, and storytelling. Youโ€™ll work on mini-projects, gain mentorship, and explore tools like Python, Pandas, and scikit-learn, while also improving your data visualization and communication skills. ๐Ÿ“Œ Key Responsibilities ๐Ÿ“Š Collect, clean, and explore real-world datasets ๐Ÿ“ˆ Perform Exploratory Data Analysis (EDA) to find trends and insights ๐Ÿค– Build basic Machine Learning models using libraries like scikit-learn ๐Ÿ“š Work with tools like NumPy, Pandas, Matplotlib, Seaborn, Jupyter ๐Ÿง  Understand feature engineering, model evaluation, and tuning ๐Ÿ—‚๏ธ Present insights through charts, notebooks, or dashboards ๐Ÿ” Learn and apply statistical thinking to solve problems ๐Ÿ‘ฅ Who Should Apply? ๐ŸŽ“ Students or recent graduates with interest in Data Science / Analytics ๐Ÿ’ก Individuals familiar with Python and basic statistics ๐Ÿ“‰ Learners who enjoy working with numbers and uncovering patterns ๐Ÿง  Problem-solvers looking to turn data into meaningful stories ๐Ÿš€ Beginners who want to build a portfolio-ready data project ๐ŸŽ Perks & Learning Outcomes ๐Ÿ“œ Official Offer Letter & Certificate of Completion ๐ŸŒ Remote & Flexible Schedule โ€” no fixed hours, learn your way ๐Ÿ’ป Project-Based Learning โ€” build a strong data science portfolio ๐Ÿง  Learn tools like Pandas, NumPy, Matplotlib, Seaborn, scikit-learn, Power BI ๐Ÿ“Š Get mentorship in EDA, Machine Learning, and Data Storytelling ๐Ÿ† โœจ New! "Intern of the Week Certificate" โ€” awarded weekly to interns showing outstanding effort, innovation, and growth ๐ŸŽฏ Gain exposure to how real-world data challenges are tackled in tech and business ๐Ÿš€ How to Apply ๐Ÿ“ฉ Ready to dive into data and level up your skills? Submit your application and start your journey to becoming a data-driven decision maker ! Show more Show less

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Role: Financial Analysts and Advisors for Workflow Annotation Specialist
Project Type: Contract-based / Freelance / Part-time – 1 Month

Job Overview:
We are seeking domain experts to participate in a Workflow Annotation Project. The role involves documenting and annotating the step-by-step workflows of key tasks within the candidate's area of expertise. The goal is to capture real-world processes in a structured format for AI training and process optimization purposes.

Domain Expertise Required:
Collect market and company data, build/maintain financial models, craft decks, track portfolios, run risk and scenario analyses, develop client recommendations, and manage CRM workflows.

Tools & Technologies You May Have Worked With:
Commercial Software - Bloomberg Terminal, Refinitiv Eikon, FactSet, Excel, PowerPoint, Salesforce FSC, Redtail, Wealthbox, Orion Advisor Tech, Morningstar Office, BlackRock Aladdin, Riskalyze, Tolerisk, eMoney Advisor, MoneyGuidePro, Tableau, Power BI.
Open/Free Software - LibreOffice Calc, Google Sheets, Python (Pandas, yfinance, NumPy, SciPy, Matplotlib), R (QuantLib, tidyverse), SuiteCRM, EspoCRM, Plotly Dash, Streamlit, Portfolio Performance, Ghostfolio, Yahoo Finance API, Alpha Vantage, IEX Cloud (free tier).
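For a sense of the risk-and-scenario work listed under domain expertise, a small Pandas/NumPy sketch on synthetic prices; the weights and figures are illustrative, not from the posting:

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices for a two-asset portfolio
rng = np.random.default_rng(7)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, size=(252, 2)), axis=0)),
    columns=["ASSET_A", "ASSET_B"],
)

returns = prices.pct_change().dropna()
weights = np.array([0.6, 0.4])
port_ret = returns @ weights

ann_vol = port_ret.std() * np.sqrt(252)  # annualized volatility
var_95 = np.percentile(port_ret, 5)     # 1-day historical VaR at 95%
print(f"Annualized volatility: {ann_vol:.2%}, 1-day 95% VaR: {var_95:.2%}")
```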

Posted 3 days ago

Apply

3.0 - 4.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Source: LinkedIn

Job Title: Data Scientist - Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you're passionate about working at the cutting edge of AI for real-world sustainability impact, we'd love to hear from you.

Key Responsibilities
Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, anomaly detection, etc.
Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data.
Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks.
Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization.
Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions.
Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness.
Optimize image pipelines for performance, scalability, and edge/cloud deployment.

Key Requirements
3-4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects.
Proven expertise in Python and common data science libraries: NumPy, Pandas, Scikit-learn, etc.
Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
Experience with generative AI models like GANs, Stable Diffusion, or ControlNet for image generation or augmentation.
Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving.
Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC).
Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment.
Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows.
Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects.
Strong problem-solving skills, curiosity, and ability to work independently in a fast-paced environment.

Bonus / Preferred Skills
Experience with remote sensing and working with satellite or drone imagery.
Exposure to MLOps practices and tools like Kubeflow, Airflow, or SageMaker Pipelines.
Knowledge of solar technologies, photovoltaic systems, or renewable energy is a plus.
Familiarity with edge computing for vision applications on IoT devices or drones.

(ref:hirist.tech)
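A minimal hedged sketch of the object-detection side of this role, using an off-the-shelf torchvision model rather than the team's own; the image path is a hypothetical placeholder:

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Pre-trained detector as a stand-in for a custom panel-fault model
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = to_tensor(Image.open("site_photo.jpg").convert("RGB"))

with torch.no_grad():
    output = model([image])[0]  # dict with "boxes", "labels", "scores"

keep = output["scores"] > 0.8
print("Detections above 0.8 confidence:", int(keep.sum()))
```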

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 Days Work From Office)

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
Lead and mentor a high-performing AI/ML team.
Design and execute AI/ML strategies aligned with business goals.
Collaborate with product and engineering teams to identify impactful AI opportunities.
Build, train, fine-tune, and deploy ML models in production environments.
Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
Handle containerization and orchestration using Docker and Kubernetes.
Optimize GPU/TPU resources for training and inference tasks.
Develop efficient RAG pipelines with low latency and high retrieval accuracy.
Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise
Cloud Computing & Deployment:
Proficiency in AWS, Google Cloud, or Azure for scalable model deployment.
Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML.
Expertise in Docker and Kubernetes for containerized deployments.
Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.

Machine Learning & Deep Learning:
Strong command of frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost.
Experience with MLOps tools for integration, monitoring, and automation.
Expertise in pre-trained models, transfer learning, and designing custom architectures.

Programming & Software Engineering:
Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development.
Backend/API development with FastAPI, Flask, or Django.
Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery).
Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).

Scalable AI Systems:
Proven ability to build AI-driven applications at scale.
Handle large datasets, high-throughput requests, and real-time inference.
Knowledge of distributed computing: Apache Spark, Dask, Ray.

Model Monitoring & Optimization:
Hands-on with model compression, quantization, and pruning.
A/B testing and performance tracking in production.
Knowledge of model retraining pipelines for continuous learning.

Resource Optimization:
Efficient use of compute resources: GPUs, TPUs, CPUs.
Experience with serverless architectures to reduce cost.
Auto-scaling and load balancing for high-traffic systems.

Problem-Solving & Collaboration:
Translate complex ML models into user-friendly applications.
Work effectively with data scientists, engineers, and product teams.
Write clear technical documentation and architecture reports.

(ref:hirist.tech)
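To illustrate the backend/API skills listed above, a minimal FastAPI model-serving sketch; model.joblib is a hypothetical pre-trained artifact, not Cyfuture's system:

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # e.g., a fitted scikit-learn estimator

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}

# Run with: uvicorn main:app --reload
```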

Posted 3 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Source: LinkedIn

E2M is not your regular digital marketing firm. We're an equal opportunity provider, founded on strong business ethics and driven by more than 300 experienced professionals. Our client base is made up of digital agencies that rely on us to solve bandwidth issues, reduce overheads, and boost profitability. We need driven, tech-savvy professionals like you to help us deliver next-gen solutions. If you're someone who dreams big and thrives in innovation, E2M has a place for you.

Role Overview
As a Python Developer – AI Implementation Specialist/AI Executor, you will be responsible for designing and integrating AI capabilities into production systems using Python and key ML libraries. This role requires a strong backend development foundation and a proven track record of deploying AI use cases using tools like TensorFlow, Keras, or OpenAI APIs. You'll work cross-functionally to deliver scalable AI-driven solutions.

Key Responsibilities
Design and develop backend solutions using Python, with a focus on AI-driven features.
Implement and integrate AI/ML models using tools like OpenAI, Hugging Face, or LangChain.
Use core Python libraries (NumPy, Pandas, TensorFlow, Keras) to process data and train or implement models.
Translate business needs into AI use cases and deliver working solutions.
Collaborate with product, engineering, and data teams to define integration workflows.
Develop REST APIs and microservices to deploy AI components within applications.
Maintain and optimize AI systems for scalability, performance, and reliability.
Keep pace with advancements in the AI/ML landscape and evaluate tools for continuous improvement.

Required Skills & Qualifications
Minimum 5+ years of overall experience, including at least 1 year in AI/ML integration and strong hands-on expertise in Python for backend development.
Proficiency in libraries such as NumPy, Pandas, TensorFlow, and Keras.
Practical exposure to AI platforms/APIs (e.g., OpenAI, LangChain, Hugging Face).
Solid understanding of REST APIs, microservices, and integration practices.
Ability to work independently in a remote setup with strong communication and ownership.
Excellent problem-solving and debugging capabilities.
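One hedged way to picture the Hugging Face integration mentioned above, using a default public model rather than E2M's actual stack:

```python
from transformers import pipeline

# Downloads a default pre-trained sentiment model on first run
classifier = pipeline("sentiment-analysis")

result = classifier("The deployment went smoothly and the client is happy.")
print(result)  # e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```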

Posted 3 days ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Tirupati

Remote

Source: Apna

About the Role
We are looking for a passionate and knowledgeable Python & Data Science Instructor to teach and mentor students in our [online / in-person] data science program. You'll play a key role in delivering engaging lessons, guiding hands-on projects, and supporting learners as they build real-world skills in Python programming and data science. This is a great opportunity for professionals who love teaching and want to empower the next generation of data scientists.

Responsibilities
Teach core topics including:
Python fundamentals
Data manipulation with pandas and NumPy
Data visualization using matplotlib/seaborn/plotly
Machine learning with scikit-learn
Jupyter Notebooks, data cleaning, and exploratory data analysis
Deliver live or recorded lectures, tutorials, and interactive sessions
Review and provide feedback on student projects and assignments
Support students via Q&A sessions, forums, or 1-on-1 mentoring
Collaborate with curriculum designers to refine and improve content
Stay updated with the latest industry trends and tools

Requirements
Strong proficiency in Python and the data science stack (NumPy, pandas, matplotlib, scikit-learn, etc.)
Hands-on experience with real-world data projects or industry experience in data science
Prior teaching, mentoring, or public speaking experience (formal or informal)
Clear communication and ability to explain complex topics to beginners
[Bachelor's/Master's/PhD] in Computer Science, Data Science, Statistics, or a related field (preferred but not required)

Bonus Points
Experience with deep learning frameworks (TensorFlow, PyTorch)
Familiarity with cloud platforms (AWS, GCP, Azure)
Experience teaching online using tools like Zoom, Slack, or LMS platforms
Contribution to open-source, GitHub portfolio, or Kaggle participation

What We Offer
Flexible working hours and remote-friendly environment
Opportunity to impact learners around the world
Supportive and innovative team culture
Competitive pay and performance incentives

Posted 3 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Join us as a Machine Learning Engineer

In this role, you'll be driving and embedding the deployment, automation, maintenance and monitoring of machine learning models and algorithms. Day-to-day, you'll make sure that models and algorithms work effectively in a production environment while promoting data literacy education with business stakeholders. If you see opportunities where others see challenges, you'll find that this solutions-driven role will be your chance to solve new problems and enjoy excellent career development.

What you'll do
Your daily responsibilities will include collaborating with colleagues to design and develop advanced machine learning products which power our group for our customers. You'll also codify and automate complex machine learning model productions, including pipeline optimisation. We'll expect you to transform advanced data science prototypes and apply machine learning algorithms and tools. You'll also plan, manage, and deliver larger or complex projects, involving a variety of colleagues and teams across our business.

You'll also be responsible for:
Understanding the complex requirements and needs of business stakeholders, developing good relationships with them, and understanding how machine learning solutions can support our business strategy
Working with colleagues to productionise machine learning models, including pipeline design and development, testing, and deployment, so the original intent is carried over to production
Creating frameworks to ensure robust monitoring of machine learning models within a production environment, making sure they deliver quality and performance
Understanding and addressing any shortfalls, for instance, through retraining
Leading direct reports and wider teams in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes

The skills you'll need
To be successful in this role, you'll need a good academic background in a STEM discipline, such as Mathematics, Physics, Engineering or Computer Science. You'll also have the ability to use data to solve business problems, from hypotheses through to resolution. We'll look to you to have at least twelve years of experience with machine learning on large datasets, as well as experience building, testing, supporting, and deploying advanced machine learning models into a production environment using modern CI/CD tools, including git, TeamCity and CodeDeploy.

You'll also need:
A good understanding of machine learning approaches and algorithms, such as supervised or unsupervised learning, deep learning, and NLP, with a strong focus on model development, deployment, and optimization
Experience using Python with libraries such as NumPy, Pandas, Scikit-learn, and TensorFlow or PyTorch
An understanding of PySpark for distributed data processing and manipulation, along with AWS (Amazon Web Services) including EC2, S3, Lambda, SageMaker, and other cloud tools
Experience with data processing frameworks such as Apache Kafka and Apache Airflow, containerization technologies such as Docker, and orchestration tools such as Kubernetes
Experience of building GenAI solutions to automate workflows to improve productivity and efficiency
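A hedged sketch of the productionisation theme above: packaging preprocessing and model as one artifact so the original intent carries over to production (synthetic data, hypothetical file name):

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Bundle preprocessing and model so production inference matches training exactly
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X, y)

joblib.dump(pipeline, "model_v1.joblib")  # versioned artifact for deployment
print("Training accuracy:", pipeline.score(X, y))
```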

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

PYTHON DEVELOPER

Role Overview:
Your primary focus will be to develop, test, and maintain automation scripts that support Cyber Security Advisory at Ontinue. Working collaboratively with engineers and security specialists, you will help identify areas where automation can enhance efficiency, reduce manual effort, and improve the customer experience. Beyond writing scripts, you will also be responsible for debugging and troubleshooting automation issues, ensuring that all code adheres to security best practices and industry standards. Maintaining comprehensive documentation will be a key part of your role, ensuring that workflows, processes, and automation scripts are well-documented for future reference and scalability. Staying up to date with industry trends and new automation technologies will be essential. You will be encouraged to bring fresh ideas and innovative solutions that contribute to the ongoing evolution of our platform, ensuring that Ontinue remains at the forefront of MDR innovation.

Work Location & Schedule:
This role can be remote or based in our Noida office. You must be available for late shifts at least two days per week to collaborate effectively with the head of Cyber Advisory USA (US – Central Time) and the US-based team. Additional late shifts may be required based on project needs.

Key Responsibilities:
Develop, test, and maintain automation scripts in Python to optimize and enhance the ION MDR Platform.
Collaborate with security analysts, engineers, and stakeholders to identify automation opportunities and improve operational efficiency.
Write clean, maintainable, and efficient Python code, following industry best practices.
Debug and troubleshoot automation scripts, ensuring reliability and performance.
Document scripts, workflows, and automation processes for future reference and knowledge sharing.
Ensure that automation scripts follow security best practices, adhering to industry standards and compliance requirements.
Stay up to date with emerging automation technologies and best practices, bringing innovative ideas to the team.

Qualifications & Experience:
We are looking for a Python developer with a strong background in automation, who has at least three years of hands-on experience working with Python in a security or operational automation environment.

You should have experience with:
Cloud platforms such as Azure and Microsoft Graph API.
SIEM, SOAR, and security automation tools.
CI/CD pipelines and version control tools like Git, GitHub, or GitLab.
RESTful APIs and integrating them into automation workflows.
Data structures and algorithms for efficient automation processes.
Willingness to start later and finish later to work with the US time zone-based team.

Preferred Skills & Competencies:
While not mandatory, experience with the following is highly desirable:
Data analysis tools like Pandas or NumPy to process security-related data.
Python automation frameworks such as Selenium, PyAutoGUI, etc.
Networking fundamentals and system administration to support security automation tasks.
Additional scripting languages such as Bash or PowerShell for extended automation capabilities.

What We Offer:
We have been recognized as a TOP place to work! In addition to a competitive salary, we also offer great benefits including 18 days off a year, an annual subscription to Headspace, recognition awards, anniversary rewards, a monthly phone allowance, and access to management and Microsoft training.

Come as you are!
We search for amazing people of diverse backgrounds, experiences, abilities, and perspectives. Ontinue welcomes and encourages diversity in the workplace regardless of race, gender, religion, age, sexual orientation, disability, or veteran status.

Next Steps:
If you have the skills and experience required and feel that Ontinue is a place you can belong to, we would love to get to know you better! Please drop an application for this role and our talent acquisition manager will be in touch to discuss further.

Learn more: www.ontinue.com
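For a feel of the REST-based automation this role centres on, a minimal hedged sketch; the endpoint, token, and response shape are hypothetical, not Ontinue's API:

```python
import requests

# Hypothetical alerting API; a typical pull step in a security automation script
BASE_URL = "https://api.example.com/v1"
session = requests.Session()
session.headers.update({"Authorization": "Bearer <token>"})

resp = session.get(f"{BASE_URL}/alerts", params={"status": "open"}, timeout=30)
resp.raise_for_status()

for alert in resp.json().get("alerts", []):
    print(alert["id"], alert.get("severity"))
```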

Posted 3 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Source: LinkedIn

Location: Remote/Hybrid (India-based preferred)
Type: Full-Time

Must Haves (Don't Apply If You Miss Any)
3+ years of experience in Data Engineering
Proven hands-on experience with ETL pipelines (end-to-end ownership)
AWS resources: deep experience with EC2, Athena, Lambda, Step Functions (non-negotiable; critical to the role)
Strong with MySQL (not negotiable)
Docker (setup, deployment, troubleshooting)

Good To Have (Adds Major Value)
Airflow or any modern orchestration tool
PySpark experience
Python ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy, DLT (Data Load Tool)

About You
You're a builder, not just a maintainer.
You can work independently but communicate crisply.
You thrive in fast-moving, startup environments.
You care about ownership and impact, not just code.
Include the code word Red Panda in your application message, so that we know you have read this section.

What You'll Do
Architect, build, and optimize robust data pipelines and workflows
Own AWS resource configuration, optimization, and troubleshooting
Collaborate with product and engineering teams to deliver business impact fast
Automate and scale data processes — no manual-work culture
Shape the data foundation for real business decisions

Cut to the chase. Only serious, relevant applicants will be considered.
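As a hedged illustration of the Python-ecosystem tools listed above, DuckDB can run SQL directly over an in-memory Pandas DataFrame; the table and columns here are hypothetical:

```python
import duckdb
import pandas as pd

# Hypothetical orders table; DuckDB's replacement scan reads local DataFrames
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "amount": [120.0, 80.0, 200.0, 50.0, 75.0, 25.0],
})

result = duckdb.query("""
    SELECT customer_id, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
    ORDER BY total DESC
""").df()
print(result)
```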

Posted 3 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Company Overview
Viraaj HR Solutions is dedicated to connecting talented professionals with dynamic companies in India. Our goal is to streamline the hiring process and provide exceptional staffing solutions tailored to our clients' needs. At Viraaj HR Solutions, we prioritize building lasting relationships and ensuring a diverse workforce. We foster a culture of collaboration, innovation, and integrity, aiming to make a positive impact in the recruitment industry.

Role Responsibilities
Develop and maintain Python scripts for automation tasks.
Create scripts for data analysis and reporting.
Collaborate with the development team to integrate Python applications with existing systems.
Implement API integrations for external data sources.
Conduct unit testing and debugging of scripts to ensure optimal functionality.
Optimize the performance of existing scripts for efficiency and reliability.
Assist in database management and data retrieval using Python.
Document code changes and maintain version control using Git.
Monitor and troubleshoot scripting issues as they arise.
Participate in code reviews and provide constructive feedback.
Stay updated with industry trends and emerging technologies.
Train and mentor junior developers in Python scripting best practices.
Collaborate with stakeholders to gather requirements for new scripting projects.
Ensure compliance with company coding standards and guidelines.
Prepare technical documentation for developed scripts and systems.
Provide technical support for deployed scripts and applications.

Qualifications
Bachelor's degree in Computer Science or a related field.
Proven experience in Python scripting and programming.
Strong understanding of scripting languages and frameworks.
Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
Familiarity with debugging tools and practices.
Knowledge of version control systems, particularly Git.
Experience with APIs and web services integration.
Proficiency in database management (SQL or NoSQL).
Excellent problem-solving skills and analytical thinking.
Ability to work independently and as part of a team.
Strong communication skills and attention to detail.
Previous experience with system automation is an advantage.
Knowledge of code optimization techniques.
Experience in mentoring or training junior staff is a plus.
Familiarity with Agile/Scrum methodologies.
Willingness to learn and adapt to new technologies.

Skills: data manipulation (Pandas, NumPy), API integration, data analysis, problem-solving, Agile/Scrum methodologies, database management, Python scripting, system automation, version control (Git), unit testing
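The unit-testing responsibility above, sketched with pytest; normalize_ids is a hypothetical helper invented for illustration:

```python
# script_utils.py (hypothetical helper under test)
def normalize_ids(raw_ids):
    """Strip whitespace, drop blanks, and deduplicate while preserving order."""
    seen, result = set(), []
    for rid in raw_ids:
        rid = rid.strip()
        if rid and rid not in seen:
            seen.add(rid)
            result.append(rid)
    return result

# test_script_utils.py (run with: pytest)
def test_normalize_ids():
    assert normalize_ids([" a ", "b", "", "a"]) == ["a", "b"]
```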

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Summary
The UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex and large-scale business problems for UPS operations. This role would also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem, and develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to be able to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods and AI.
Be actively involved in understanding and converting business use cases to technical requirements for modelling.
Query, analyze and extract insights from large-scale structured and unstructured data from different data sources, utilizing different platforms, methods and tools like BigQuery, Google Cloud Storage, etc.
Understand and apply appropriate methods for cleaning and transforming data and engineering relevant features to be used for modelling.
Actively drive modelling of business problems into ML/AI models, and work closely with stakeholders for model evaluation and acceptance.
Work closely with the MLOps team to productionize new models, support enhancements, and resolve any issues within existing production AI applications.
Prepare extensive technical documentation, dashboards and presentations for technical and business stakeholders, including leadership teams.

Qualifications
Expertise in Python and SQL. Experienced in using data-science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
Strong understanding of statistical concepts and methods (like hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
Strong ownership and collaborative qualities in the relevant domain. Takes initiative to identify and drive opportunities for improvement and process streamlining.
Solid oral and written communication skills, especially around analytical concepts and methods.
Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
Master's Degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience.

Bonus Qualifications
NLP, Gen AI, LLM knowledge/experience.
Knowledge of Operations Research methodologies and experience with packages like CPLEX, PULP, etc.
Knowledge and experience in MLOps principles and tools in GCP.
Experience working in an Agile environment and understanding of Lean Agile principles.

Contract Type: Permanent
At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
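The hypothesis-testing qualification above, as a minimal hedged sketch on synthetic operational data; the routes and numbers are invented for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic delivery times (minutes) for two routing strategies
rng = np.random.default_rng(42)
route_a = rng.normal(30.0, 4.0, 200)
route_b = rng.normal(28.5, 4.0, 200)

t_stat, p_value = stats.ttest_ind(route_a, route_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: mean delivery times differ.")
```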

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow — people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Summary
The UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex and large-scale business problems for UPS operations. This role would also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem, and develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to be able to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods and AI.
Be actively involved in understanding and converting business use cases to technical requirements for modelling.
Query, analyze and extract insights from large-scale structured and unstructured data from different data sources, utilizing different platforms, methods and tools like BigQuery, Google Cloud Storage, etc.
Understand and apply appropriate methods for cleaning and transforming data and engineering relevant features to be used for modelling.
Actively drive modelling of business problems into ML/AI models, and work closely with stakeholders for model evaluation and acceptance.
Work closely with the MLOps team to productionize new models, support enhancements, and resolve any issues within existing production AI applications.
Prepare extensive technical documentation, dashboards and presentations for technical and business stakeholders, including leadership teams.

Qualifications
Expertise in Python and SQL. Experienced in using data-science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
Strong understanding of statistical concepts and methods (like hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
Strong ownership and collaborative qualities in the relevant domain. Takes initiative to identify and drive opportunities for improvement and process streamlining.
Solid oral and written communication skills, especially around analytical concepts and methods.
Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
Master's Degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience.

Bonus Qualifications
NLP, Gen AI, LLM knowledge/experience.
Knowledge of Operations Research methodologies and experience with packages like CPLEX, PULP, etc.
Knowledge and experience in MLOps principles and tools in GCP.
Experience working in an Agile environment and understanding of Lean Agile principles.

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 3 days ago

Apply


3.0 - 5.0 years

8 - 10 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Source: Naukri

Strong in Python, with experience with Jupyter notebooks and Python packages like polars, pandas, numpy, scikit-learn, matplotlib, etc.
Must have: Experience with the machine learning lifecycle, including data preparation, training, evaluation, and deployment
Must have: Hands-on experience with GCP services for ML & data science
Must have: Experience with Vector Search and Hybrid Search techniques
Must have: Experience with embeddings generation using models like BERT, Sentence Transformers, or custom models
Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN, Annoy)
Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation)
Must have: Understanding of semantic vs lexical search paradigms
Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost, LightGBM with LTR support)
Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k, recall, nDCG, MRR)
Should be proficient in SQL and BigQuery for analytics and feature generation
Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark
Should have experience deploying models and services using Vertex AI, Cloud Run, or Cloud Functions
Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch) and blending it with vector-based approaches
Should understand how to build end-to-end ML pipelines for search and ranking applications
Should have exposure to CI/CD pipelines and model versioning practices
Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval
Good to have: Familiarity with TensorFlow Hub, Hugging Face, or other model repositories
Good to have: Experience with prompt engineering, context windowing, and embedding optimization for LLM-based systems

GCP Tools Experience:
ML & AI: Vertex AI, Vertex AI Matching Engine, AutoML, AI Platform
Storage: BigQuery, Cloud Storage, Firestore
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Search: Vector Databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch
Compute: Cloud Run, Cloud Functions, Vertex Pipelines, Cloud Dataproc (Spark/PySpark)
CI/CD & IaC: GitLab/GitHub Actions

Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
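To ground the vector-search requirements above, a minimal FAISS sketch with random vectors standing in for real embeddings; the dimension and data are illustrative:

```python
import faiss
import numpy as np

# Random unit vectors stand in for sentence embeddings (e.g., from BERT)
d, n = 384, 10_000
rng = np.random.default_rng(0)
corpus = rng.normal(size=(n, d)).astype("float32")
faiss.normalize_L2(corpus)  # cosine similarity via inner product

index = faiss.IndexFlatIP(d)
index.add(corpus)

query = rng.normal(size=(1, d)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)  # top-5 nearest neighbours
print("ids:", ids[0], "scores:", scores[0])
```

In a hybrid setup, these vector scores would then be blended with a lexical signal such as BM25 from Elasticsearch or OpenSearch, as the listing describes.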

Posted 3 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Data Ops Capability Deployment - Analyst is a seasoned professional role. The analyst applies in-depth disciplinary knowledge, contributing to the development of new solutions, frameworks and techniques and to the improvement of processes and workflows for the Enterprise Data function, and integrates subject matter and industry expertise within a defined area. The role requires an in-depth understanding of how areas collectively integrate within the sub-function, as well as the ability to coordinate and contribute to the objectives of the function and the overall business.

The primary purpose of this role is to perform data analytics and data analysis across different asset classes, and to build data science and tooling capabilities within the team. This involves working closely with the wider Enterprise Data team, in particular the front-to-back leads, to deliver business priorities. The role sits within the B&I Data Capabilities team in Enterprise Data. The team manages the Data Quality/Metrics/Controls program and has a broad remit to implement and embed improved data governance and data management practices throughout the region. The Data Quality program is centered on enhancing Citi's approach to data risk and addressing regulatory commitments in this area.

Key Responsibilities:
- Bring a hands-on data engineering background and a thorough understanding of distributed data platforms and cloud services
- Apply a sound understanding of data architecture and data integration with enterprise applications
- Research and evaluate new data technologies, data mesh architecture and self-service data platforms
- Work closely with the Enterprise Architecture team on the definition and refinement of the overall data strategy
- Address performance bottlenecks, design batch orchestrations and deliver reporting capabilities
- Perform complex data analytics (data cleansing, transformation, joins, aggregation, etc.) on large, complex datasets (a minimal sketch of this kind of work follows the listing)
- Build analytics dashboards and data science capabilities for Enterprise Data platforms
- Communicate complicated findings and propose solutions to a variety of stakeholders
- Understand business and functional requirements provided by business analysts and convert them into technical design documents
- Work closely with cross-functional teams such as Business Analysis, Product Assurance, Platforms and Infrastructure, Business Office, Control and Production Support
- Prepare handover documents and manage SIT, UAT and implementation
- Demonstrate an in-depth understanding of how the development function integrates within the overall business/technology to achieve objectives; this requires a good understanding of the banking industry
- Perform other duties and functions as assigned
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency

Skills & Qualifications:
- 10+ years of active development background; experience in Financial Services or Finance IT is required
- Experience with data quality, data tracing, data lineage and metadata management tools
- Hands-on ETL experience using PySpark on distributed platforms, including data ingestion, Spark optimization, resource utilization, capacity planning and batch orchestration
- In-depth understanding of Hive, HDFS, Airflow and job schedulers
- Strong programming skills in Python, with experience in data manipulation and analysis libraries (Pandas, NumPy)
- Ability to write complex SQL and stored procedures
- Experience with DevOps tooling: Jenkins/Lightspeed, Git, CoPilot
- Strong knowledge of one or more BI visualization tools such as Tableau or Power BI
- Proven experience implementing data lake/data warehouse solutions for enterprise use cases
- Exposure to analytical tools and AI/ML is desired

Education: Bachelor's/University degree or master's degree in Information Systems, Business Analysis or Computer Science.

------------------------------------------------------
Job Family Group: Data Governance
------------------------------------------------------
Job Family: Data Governance Foundation
------------------------------------------------------
Time Type: Full time
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
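For illustration, here is a minimal, hypothetical sketch of the kind of PySpark data-quality work this listing describes (cleansing, rule checks, aggregation on large datasets). The path, table and column names are invented for the example, not Citi's:

```python
# Hypothetical sketch: profile completeness and a simple validity rule on a
# trade dataset, aggregated by asset class. All names are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-metrics-sketch").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/trades/")  # placeholder path

# Completeness per column: share of non-null values (F.count skips nulls).
completeness = trades.select(
    [(F.count(F.col(c)) / F.count(F.lit(1))).alias(c) for c in trades.columns]
)
completeness.show()

# Flag records breaching a simple validity rule, then aggregate by asset class.
dq_summary = (
    trades
    .withColumn("is_valid", F.col("notional") > 0)
    .groupBy("asset_class")
    .agg(
        F.count("*").alias("records"),
        F.avg(F.col("is_valid").cast("double")).alias("valid_ratio"),
    )
)
dq_summary.show()
```

In practice, aggregates like these would feed the data-quality metrics dashboards (Tableau, Power BI) the listing mentions.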

Posted 3 days ago

Apply

3.0 - 5.0 years

50 - 55 Lacs

Bengaluru

Work from Office

Naukri logo

About the Opportunity
Application close date: 31 July 2025
Title: Senior Analyst Programmer
Department: Technology - Corporate Enablers (CFO & CE Technology)
Location: Bangalore, India
Reports To: Senior Manager

Department Overview
The CFO and CE Cloud Technology function provides systems development, implementation and support services for FIL's Corporate Enablers team. We support several functions spanning Business Finance & Management Accounting, Financial Accounting & Analytics, Taxation, Global Procurement, Corporate Treasury and several other teams across all of FIL's international locations, including the UK, Japan, China and India. We provide IT services to the Fidelity International businesses globally. These include development and support of business functions that underpin our financial accounting and decision making for the global CFO organisation, and we implement multiple systems, including ERP platforms, home-grown apps and third-party products. We are system providers to key process lifecycles such as Procure to Pay (P2P/Global Procurement), Record to Report (R2R), Order to Cash (O2C) and Acquire to Retire (A2R). We also manage systems that enable cash management, forex trading and treasury operations across the globe. We own warehouses that consolidate data from across the organisation's functions to provide meaningful insights.

We are seeking a skilled and experienced Python developer to join our team. The ideal candidate will have a strong background in API development and PL/SQL stored procedures, along with a good understanding of Kubernetes, AWS and SnapLogic cloud-native technologies. This role requires deep technical expertise and the ability to work in a dynamic, fast-paced environment.

Essential Skills
Must-have technical skills:
- Knowledge of the latest Python frameworks and technologies (e.g., Django, Flask, FastAPI)
- Experience with Python libraries and tools (e.g., Pandas, NumPy, SQLAlchemy)
- Strong experience in designing, developing and maintaining RESTful APIs
- Familiarity with API security, authentication and authorization mechanisms (e.g., OAuth, JWT)
- Good hands-on knowledge of PL/SQL (packages, functions, ref cursors)
- Experience in development and low-level design of warehouse solutions
- Familiarity with Data Warehouse, Datamart and ODS concepts
- Knowledge of data normalisation and Oracle performance optimisation techniques

Good-to-have technical skills:
- Hands-on experience with Kubernetes for container orchestration
- Knowledge of deploying, managing and scaling applications on Kubernetes clusters
- Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda)
- Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation)
- Experience with the SnapLogic cloud-native integration platform
- Ability to design and implement integration pipelines using SnapLogic

Key Responsibilities
- Develop and maintain high-quality Python code for API services (a minimal sketch follows this listing)
- Design and implement containerized applications using Kubernetes
- Utilize AWS services for cloud infrastructure and deployment
- Create and manage integration pipelines using SnapLogic
- Write and optimize PL/SQL stored procedures for database operations
- Collaborate with cross-functional teams to deliver high-impact solutions
- Ensure code quality, security and performance through best practices

Experience and Qualification
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed university
- Total 5 to 7 years of experience in application development with Python and APIs, along with Oracle RDBMS, SQL and PL/SQL

Personal Characteristics
- Excellent communication skills, both verbal and written
- Strong interest in technology and its applications
- Self-motivated team player
- Ability to work under pressure and meet deadlines

Feel Rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
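For illustration, a minimal, hypothetical sketch of the API-plus-PL/SQL pattern this role centers on: a FastAPI endpoint that calls an Oracle stored procedure through python-oracledb. The connection details and the billing_pkg.get_invoice_status procedure are invented for the example:

```python
# Hypothetical sketch: FastAPI endpoint backed by a PL/SQL stored procedure.
# Connection details and procedure names are invented for illustration.
import oracledb
from fastapi import FastAPI

app = FastAPI()

# Placeholder credentials; real deployments would read these from config/secrets.
pool = oracledb.create_pool(user="app", password="secret", dsn="db-host/ORCLPDB1")

@app.get("/invoices/{invoice_id}")
def get_invoice_status(invoice_id: int) -> dict:
    with pool.acquire() as conn:
        with conn.cursor() as cur:
            status = cur.var(str)  # OUT parameter for the procedure
            cur.callproc("billing_pkg.get_invoice_status", [invoice_id, status])
            return {"invoice_id": invoice_id, "status": status.getvalue()}
```

Run locally with, e.g., `uvicorn app:app --reload`; the connection pool keeps per-request Oracle connection overhead low.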

Posted 3 days ago

Apply

4.0 years

40 - 50 Lacs

Cuttack, Odisha, India

Remote

Linkedin logo

Experience: 4.00+ years
Salary: INR 4000000-5000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch

Crop.Photo is looking for:
Our engineers don't just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges, this role is for you. You'll be working on systems focused on the transformation and generation of millions of visual assets for small-to-large enterprises at scale.

What You'll Do
- Build and own AI-backed features end to end, from ideation to production, including layout logic, smart cropping, visual enhancement, out-painting and GenAI workflows for background fills
- Design scalable APIs that wrap vision models such as BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP and ControlNet into batch and real-time pipelines
- Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL and PyTorch (a minimal crop sketch follows this listing)
- Handle pixel-level transformations, from custom masks and color space conversions to geometric warps and contour ops, with speed and precision
- Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory and throughput
- Frame problems when specs are vague; you'll help define what "good" looks like, and then build it
- Collaborate with product, UX and other engineers without relying on formal handoffs; you own your domain

What You'll Need
- 2-3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON or TryOnGAN, including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions)
- Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch and image visualization/debugging techniques
- 1-2 years of experience working with popular LLM APIs such as OpenAI, Anthropic and Gemini, and with composing multi-modal pipelines
- Solid grasp of production model integration: model loading, GPU/CPU optimization, async inference, caching and batch processing
- Experience solving real-world visual problems like object detection, segmentation, composition or enhancement
- Ability to debug and diagnose visual output errors, e.g., weird segmentation artifacts, off-center crops, broken masks
- Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
- Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda)
- A customer-centric approach: you think about how your work affects end users and product experience, not just model performance
- A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed
- The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket

Who You Are
- You've built systems, not just prototypes
- You care about both ML results and the system's behavior in production
- You're comfortable taking a rough business goal and shaping the technical path to get there
- You're energized by product-focused AI work, the things that users feel and rely on
- You've worked in or want to work in a startup-grade environment: messy, fast and impactful

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
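For illustration, a minimal, hypothetical sketch of the pixel-level work described above: a padded "smart crop" around the largest foreground contour using OpenCV and NumPy. Otsu thresholding stands in for a real segmentation model (e.g., SAM or BiRefNet), and the file names are placeholders:

```python
# Hypothetical sketch: crop around the dominant foreground region with padding.
import cv2
import numpy as np

def smart_crop(image_bgr: np.ndarray, pad: int = 16) -> np.ndarray:
    """Crop around the largest foreground contour, with padding."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu threshold as a stand-in for a model-predicted mask.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image_bgr  # nothing detected; return the input unchanged
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    h_img, w_img = image_bgr.shape[:2]
    # Clamp the padded box to the image bounds before slicing.
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1, y1 = min(x + w + pad, w_img), min(y + h + pad, h_img)
    return image_bgr[y0:y1, x0:x1]

img = cv2.imread("product.jpg")  # placeholder input path
if img is not None:
    cv2.imwrite("product_cropped.jpg", smart_crop(img))
```

A production version would swap the threshold step for model-predicted masks and batch the operation across many assets.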

Posted 3 days ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies