1492 Clustering Jobs - Page 43

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About CloudBees

CloudBees provides the leading software delivery platform for enterprises, enabling them to continuously innovate, compete, and win in a world powered by the digital experience. Designed for the world's largest organizations with the most complex requirements, CloudBees enables software development organizations to deliver scalable, compliant, governed, and secure software from the code a developer writes to the people who use it. The platform connects with other best-of-breed tools, improves the developer experience, and enables organizations to bring digital innovation to life continuously, adapt quickly, and unlock business outcomes that create market leaders and disruptors. CloudBees was founded in 2010 and is backed by Goldman Sachs, Morgan Stanley, Bridgepoint Credit, HSBC, Golub Capital, Delta-v Capital, Matrix Partners, and Lightspeed Venture Partners. Visit www.cloudbees.com and follow us on Twitter, LinkedIn, and Facebook.

Position Description

CloudBees, the leader in CI/CD (Continuous Integration and Delivery) and the company behind Jenkins, is seeking an experienced QE Engineer. CloudBees Flow and CloudBees Core are complex products spanning technologies ranging from distributed systems, clustering, databases, multi-threaded processing, complex scheduling, and much more. CloudBees Build Acceleration is a build and test acceleration platform for Make-based, Ninja-based, and Visual Studio build environments that intelligently and automatically parallelizes software tasks across clusters of physical or cloud CPUs to dramatically lower build and test cycle times. CloudBees Build Acceleration reduces software build times by distributing the build over a large cluster of inexpensive servers. By using a dependency management system, CloudBees Build Acceleration identifies and fixes problems in real time that would break traditional parallel builds.
CloudBees Build Acceleration seamlessly plugs into existing software development environments and includes web-based management and reporting tools.

In this role, you will develop and maintain automated tests using Playwright, Selenide, and other test frameworks to ensure comprehensive test coverage, and execute various types of testing of the CD-RO and Accelerator products to meet the scalability and performance requirements of our customers, while collaborating in Agile environments.

- Perform automated (50%) and manual (20%) testing, as well as various other DevOps activities supporting the QA process (10%)
- Load/stress testing of the web applications and backend
- Create and update test scripts, reports, and other test documentation
- Collaborate with product management, support, and engineering teams to establish priorities, understand requirements, formulate test plans, and execute them accordingly
- Work with customer support teams to debug customer trouble tickets and reproduce them when necessary
- Debug test failures, track issues in Jira, and integrate tests into CI/CD pipelines

The CloudBees Flow team places a high value on quality, with the expectation that the QE team serves as the ultimate gatekeeper for certifying the release readiness of the software. The ideal candidate is expected to think outside the box in striking the right balance between automation, test coverage (functionality as well as performance of the software), and the speed of testing.
Essential Skills

- Bachelor's degree in Computer Science, Engineering, or a related field
- 10+ years of experience with automation testing frameworks
- 2+ years of experience with a performance testing tool such as JMeter or Gatling
- Experience with CI/CD processes and tools such as Jenkins, GitLab, TeamCity
- Experience with Java, Groovy, or Kotlin
- Experience with at least one of the cloud computing services: Google Cloud Platform, Amazon Web Services, or Microsoft Azure
- Experience with command-line interfaces on Linux
- Experience with Kubernetes and Helm
- Experience with UI and API test automation tools, particularly in the Java stack (e.g. LoadRunner, JMeter, Gatling, Selenium, Playwright, API testing)
- Working experience setting up and managing databases such as Oracle, MySQL, MS SQL, MariaDB, PostgreSQL

Nice to Haves

- Previous experience with test management/reporting tools
- Previous experience with bash scripting and other command-line interfaces on Linux, macOS, and Windows

Responsibilities

- Be part of a 2-week sprint, executing payloads in lockstep with developers
- Develop a thorough understanding of the plugin assembly-line process, from design to delivery, and follow it; where necessary, improve existing processes and advocate for delivering top-quality software
- Work as an independent contributor collaborating with a team of developers and other test engineers
- Collaborate with product management, support, and engineering teams to establish priorities

What You'll Get

- Highly competitive benefits and vacation package
- The chance to work for one of the fastest-growing companies with some of the most talented people in the industry
- Team outings
- A fun, hardworking, and casual environment
- Endless growth opportunities

At CloudBees, we truly believe that the more diverse we are, the better we serve our customers. A global community like Jenkins demands a global focus from CloudBees.
Organizations with greater diversity (gender, racial, ethnic, and global) are stronger partners to their customers. Whether by creating more innovative products, better understanding our worldwide customers, or establishing a stronger cross-section of cultural leadership skills, diversity strengthens all aspects of the CloudBees organization. In the technology industry, diversity creates a competitive advantage. CloudBees customers demand technologies from us that solve their software development, and therefore their business, problems so that they can better serve their own customers. CloudBees attributes much of its success to its worldwide workforce and commitment to global diversity, which opens our proprietary software to innovative ideas from anywhere. Along the way, we have witnessed firsthand how employees, partners, and customers with diverse perspectives and experiences contribute to creative problem solving and better solutions for our customers and their businesses.

Scam Notice

Please be aware that individuals and organizations may attempt to scam job seekers by offering fraudulent employment opportunities in the name of CloudBees. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that CloudBees will never ask for any personal account information, such as cell phone, credit card, or bank account numbers, during the recruitment process. Additionally, CloudBees will never send you a check for any equipment prior to employment. All communication from our recruiters and hiring managers will come from official company email addresses (@cloudbees.com) or from Paylocity, and will never ask for any payment or fee to be paid, or purchases to be made, by the job seeker.
If you are contacted by anyone claiming to represent CloudBees and you are unsure of their authenticity, please do not provide any personal or financial information, and contact us immediately at tahelp@cloudbees.com. We take these matters very seriously and will work to ensure that any fraudulent activity is reported and dealt with appropriately. If you feel you have been scammed in the US, please report it to the Federal Trade Commission at https://reportfraud.ftc.gov/#/. In Europe, please contact the European Anti-Fraud Office at https://anti-fraud.ec.europa.eu/olaf-and-you/report-fraud_en

Signs of a Recruitment Scam

- Ensure there are no other domains before or after @cloudbees.com. For example: "name.dr.cloudbees.com"
- Check any documents for poor spelling and grammar; this is often a sign that fraudsters are at work
- A generic email address such as @Yahoo or @Hotmail is provided as a point of contact
- You are asked for money: an "administration fee", "security fee", or "accreditation fee"
- You are asked for cell phone account information
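The Build Acceleration description in this listing hinges on one idea: a dependency-aware scheduler can run independent build steps in parallel while still ordering dependent ones correctly. A minimal sketch of that idea, using Python's standard library and an entirely hypothetical task graph (the real product works at far larger scale and invokes actual compilers):

```python
# Illustrative sketch of dependency-aware parallel builds, the core idea
# behind build-acceleration tools. The task graph below is hypothetical.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
deps = {
    "link": {"compile_a", "compile_b"},
    "compile_a": {"gen_headers"},
    "compile_b": {"gen_headers"},
    "gen_headers": set(),
}

def run_task(name: str) -> str:
    # Stand-in for invoking a real compiler or linker step.
    return name

def parallel_build(graph: dict) -> list:
    order = []  # completion order, for inspection
    ts = TopologicalSorter(graph)
    ts.prepare()
    with ThreadPoolExecutor(max_workers=4) as pool:
        while ts.is_active():
            ready = list(ts.get_ready())  # tasks whose deps are all done
            # Independent tasks in this wave run concurrently on the pool.
            for name in pool.map(run_task, ready):
                order.append(name)
                ts.done(name)
    return order

result = parallel_build(deps)
```

Here `compile_a` and `compile_b` run in the same wave, while `link` only starts once both finish; scaling the worker pool to a cluster of machines is what turns this scheme into build acceleration.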

Posted 3 weeks ago


8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We’re building the next-generation AI storage system, catering to the needs of massive-scale AI factories and the unique demands of the modern GenAI era. The modernization will deliver unparalleled performance, immense value, and exceptional experiences for our customers by modernizing and scaling the stack through the development of several advanced technologies spanning storage management, memory management, clustering, filesystems, distributed systems, and performance for our next-gen software-defined storage platform. Most importantly, we’re modernizing with AI to accelerate our execution, streamline and standardize processes, and reimagine work and customer experiences. Join us to do the best work of your career and make a profound social impact as a Software Principal Engineer on our Software Engineering Team in Bangalore.

What You’ll Achieve

As a Software Principal Engineer, you will be part of the Protocols development team, innovating and delivering next-gen high-performance NFS support. You will collaborate closely with our global engineering talent and will have significant opportunities to innovate on and modernize the next-gen storage platform.

You will:

- Design, develop, and deliver protocol support for the next-gen AI storage platform
- Contribute to the design and architecture of high-quality, complex systems and software/storage environments
- Prepare, review, and evaluate software/storage specifications for products and systems
- Contribute to the development and implementation of test strategies for complex software and storage products and systems

Take the first step towards your dream career. Every Dell Technologies team member brings something unique to the table.
Here’s what we are looking for with this role:

Essential Requirements

- Hands-on coding experience in C/C++ and Python
- Experience in filesystem internals, Linux, kernel, VFS/NFS
- Solid understanding of concurrency and synchronization
- Strong knowledge of object-oriented design, data structures, and algorithms
- Agile-based development experience
- Networking and storage troubleshooting skills

Desirable Requirements

- 8+ years of related experience
- Bachelor’s or Master's degree in computer science or a related field
- Understanding of distributed systems architecture and memory management is a plus

Who We Are

We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live, and play. Join us to build a future that works for everyone, because Progress Takes All of Us.

Application closing date: 30th March 2025

Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here. "#NJP" Job ID: R263260

Posted 3 weeks ago


2.0 - 4.0 years

0 Lacs

Khammam, Telangana, India

On-site


Location Name: Khammam

Job Purpose

This position is open with Bajaj Finance Ltd.

Duties and Responsibilities

- Provide analytical solutions through statistical modeling, credit policy and strategy, reporting, and data analysis for the BFL businesses
- Monitor, maintain, and improve all scorecards, policies, and processes across portfolios, and ensure their effectiveness
- Support ad hoc deep-dive data analysis on portfolio metrics
- Track and improve key performance indicators, losses, and portfolio quality; provide deep-dive analysis on portfolio metrics
- Build ML-based models to achieve maximum match and catch rates
- Build ML-based capabilities across the organization as and when required
- Work closely with business teams to understand their needs and provide analytical solutions
- Assess early warning signals using data analysis and segmentation, and take proactive policy actions as and when required
- Support the management and improvement of various offer strategies; control offer generation and distribution through data analysis
- Work closely with Product, Sales, and Risk teams to support business growth and drive new initiatives
- Liaise on an ongoing basis with IT, Credit, and BIU teams to ensure all policies, processes, and data flows work efficiently, and all required changes are built and implemented suitably

Required Qualifications and Experience

- Relevant analytical experience in scorecard development, ML modelling, segmentation, and clustering
- Preferred languages: SAS, SQL, R/Python
- Classical statistical techniques: regression, logistic regression, clustering, dimensionality-reduction techniques, hypothesis testing
- ML algorithms: KNN, NBM, DT, CART, boosting and bagging models, SVM, neural nets, ensemble models, etc.
- Experience handling large databases and the ability to do root-cause analysis
- Individual contributor with the capability to deliver projects within timelines
- Effective verbal and written communication skills
- MBA / Post Graduate with 2-4 years’ experience in financial services
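The segmentation and clustering work this listing asks for typically starts with something like k-means. A minimal pure-Python sketch of Lloyd's algorithm on hypothetical one-dimensional customer-spend data (real portfolio work would use scikit-learn or SAS on many features):

```python
# Minimal k-means (Lloyd's algorithm) sketch for customer segmentation.
# The spend values are hypothetical; this is illustrative only.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [
            sum(c) / len(c) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical monthly spend values with two obvious segments.
spend = [100, 110, 120, 900, 950, 1000]
centroids, clusters = kmeans(spend, k=2)
```

On this toy data the two centroids settle near the low-spend and high-spend groups, which is exactly the segmentation signal a scorecard or offer-strategy model would consume.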

Posted 3 weeks ago


10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Find your next role with MedGenome Labs Ltd. We are the market leader in the clinical genomics space in India and offer a comprehensive range of diagnostic services to doctors and researchers. We operate the largest CAP-accredited Next Generation Sequencing (NGS) lab in Southeast Asia, housing cutting-edge genome sequencing platforms. MedGenome is the founding member of GenomeAsia 100K, an initiative to sequence 100,000 genomes in Asia.

We have an exciting opportunity for the position of Senior Linux Administrator in Bengaluru. It is a full-time, work-from-office opportunity.

Skills and Experience Required:

- 10+ years of experience managing Linux physical servers (on-premise, L3 support)
- Expertise in installation, configuration, and upgrading of Microsoft SQL Server/MySQL/Oracle server software and related products
- Extensive knowledge of installation, configuration, and management of Red Hat Enterprise and Ubuntu servers
- Manage all external/internal applications, including DNS, LDAP, NIS, SAMBA, Apache, MySQL, PHP, Oracle
- Perform ongoing performance tuning, hardware/firmware upgrades, and resource optimization as required
- Configure CPU, memory, and disk partitions as required, and ensure high availability of infrastructure
- Server hardening (enforcing SELinux, firewall configuration, access control with minimal privileges, etc.)
- Involvement in developing and overseeing backup, replication, clustering, and failover strategies
- Hands-on experience with cluster management tools (Slurm, PBS, etc.)
- Choose and design the right infrastructure; understand vendor setups, data centres, etc.
- Maintenance and monitoring of databases (MySQL, Oracle, PostgreSQL, MSSQL, etc.)
- DevOps knowledge (Docker, Ansible, Jenkins, Chef, Puppet, etc.) would be an added advantage

Educational Qualification: B.E/B.Tech in IT/CS or MCA

If you are interested in this position, please click the APPLY NOW button for immediate employment consideration.
We regret that, due to the volume of responses, we can only contact successful initial applicants. If you have not heard from us within 7 days, your application has been unsuccessful.

Posted 3 weeks ago


5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Who We Are

At Kyndryl, we design, build, manage, and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers, and our communities.

The Role

Within our Database Administration team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations. Your expertise will be crucial in configuring, installing, and maintaining database management systems, ensuring that our systems are always running at peak performance. You'll also be responsible for managing user access, implementing the highest standards of security to protect our valuable data from unauthorized access. In addition, you'll be a disaster recovery guru, developing strong backup and recovery plans to ensure that our system is always protected in the event of a failure.

Your technical acumen will be put to use as you support end users and application developers in solving complex problems related to our database systems. As a key player on the team, you'll implement policies and procedures to safeguard our data from external threats. You will also conduct capacity planning and growth projections based on usage, ensuring that our system is always scalable to meet our business needs. You'll be a strategic partner, working closely with various teams to coordinate systematic database project plans that align with our organizational goals. Your contributions will not go unnoticed: you'll have the opportunity to propose and implement enhancements that will improve the performance and reliability of the system, enabling us to deliver world-class services to our customers.
Your Future at Kyndryl

Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are

You’re good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you’re open and borderless, naturally inclusive in how you work with others.

Required Technical and Professional Experience

- Oracle Database support experience; minimum 5 years as an Oracle DBA
- Hands-on experience with Oracle RAC/ASM/Data Guard
- Oracle Enterprise Linux, ZFS, Oracle Database, Exadata, and SuperCluster
- Advanced knowledge of relational databases (Oracle, SQL Server) and data modelling
- Assistance in the following activities, including but not limited to: installation and configuration; patch and update installation; product functionality guidance; researching setup issues and providing recommendations; Oracle product clustering and Real Application Clusters ("RAC") advice and guidance; database and system partitioning; configuration documentation and run books
- Certification: OCP / OCA

Preferred Technical and Professional Experience

- Database and storage performance optimization
- Change management and patching processes
- Technology and software lifecycle guidance
Candidates should be graduates with an IT background and a minimum of 5+ years of relevant experience working in a 24/7 environment.

Being You

Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect

With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate and build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!

If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.

Posted 3 weeks ago


8.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Calling all innovators: find your future at Fiserv.

We’re Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day, quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Advisor, Systems Engineering

What does a successful Snowflake Advisor do?

We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance, and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What You Will Do

- Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from various sources
- Integrate data from diverse systems (databases, APIs, flat files, cloud storage, etc.) into Snowflake, using tools like StreamSets, Informatica, or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization, and storage management
- Manage Snowflake caching, clustering, and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers, and business users to understand reporting and analytics needs
- Work closely with the DevOps team on automation, deployment, and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automation for regular tasks
- Enable seamless integration of Snowflake with BI tools like Power BI, and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flow, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Good working knowledge of the Linux operating system
- Working experience with Git and other repository management solutions
- Good knowledge of monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members on Snowflake implementation, performance tuning, and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues, and initiatives

What You Will Need To Have

- 8 to 10 years of experience with data management tools like Snowflake, StreamSets, and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized; works well in a fast-paced, fluid, and dynamic environment

What Would Be Great To Have

- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage solutions and IAM for access management
- Banking and financial services experience
- Knowledge of software development lifecycle best practices

Thank you for considering employment with Fiserv. Please apply using your legal name, and complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our Commitment to Diversity and Inclusion

Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to Agencies

Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts

Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
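The clustering and partitioning work this listing mentions pays off through partition pruning: Snowflake keeps min/max metadata per micro-partition, so a well-clustered table lets queries skip partitions that cannot contain matching rows. A simplified, hypothetical model of that mechanism in plain Python (not Snowflake's actual implementation):

```python
# Simplified model of partition pruning: per-partition min/max metadata
# lets a scan skip partitions that cannot contain the searched value.
# Data and partition sizes are hypothetical.
def build_metadata(partitions):
    # One (min, max) pair per partition acts as the pruning index.
    return [(min(p), max(p)) for p in partitions]

def scan_equal(partitions, metadata, value):
    """Return matching rows and count how many partitions were read."""
    scanned = 0
    hits = []
    for part, (lo, hi) in zip(partitions, metadata):
        if lo <= value <= hi:  # metadata says the value *may* be here
            scanned += 1
            hits.extend(r for r in part if r == value)
        # else: pruned without reading the partition at all
    return hits, scanned

# Well-clustered data: each partition holds a narrow key range.
partitions = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
meta = build_metadata(partitions)
rows, scanned = scan_equal(partitions, meta, 5)
```

With clustered data only one of the three partitions is read; if the same values were shuffled across partitions, every min/max range would overlap the predicate and nothing could be pruned, which is why clustering keys matter for query cost.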

Posted 3 weeks ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Senior Data Scientist

Role Overview:

We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in data science and machine learning, preferably with experience in NLP, generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role.

Responsibilities:

- Contribute to the design and implementation of state-of-the-art AI solutions
- Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI
- Collaborate with stakeholders to identify business opportunities and define AI project goals
- Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges
- Utilize generative AI techniques, such as LLMs and agentic frameworks, to develop innovative solutions for enterprise industry use cases
- Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities
- Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment
- Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs
- Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs
- Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly
- Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency
- Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases
- Ensure compliance with data privacy, security, and ethical considerations in AI applications
- Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications

Requirements:

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field; a Ph.D. is a plus
- Minimum 4 years of experience in data science and machine learning
- In-depth knowledge of machine learning, deep learning, and generative AI techniques
- Proficiency in programming languages such as Python and R, and frameworks like TensorFlow or PyTorch
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models
- Familiarity with computer vision techniques for image recognition, object detection, or image generation
- Experience with cloud platforms such as Azure, AWS, or GCP, and deploying AI solutions in a cloud environment
- Expertise in data engineering, including data curation, cleaning, and preprocessing
- Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems
- Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels
- Understanding of data privacy, security, and ethical considerations in AI applications
- Track record of driving innovation and staying updated with the latest AI research and advancements

Good to Have Skills:

- Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models
- Utilize optimization tools and techniques, including MIP (Mixed Integer Programming)
- Deep knowledge of classical AI/ML (regression, classification, time series, clustering)
- Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models
- Implement CI/CD pipelines for streamlined model deployment and scaling processes
- Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines
- Apply infrastructure-as-code (IaC) principles, employing tools like Terraform or CloudFormation
- Implement monitoring and logging tools to ensure AI model performance and reliability
- Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment
- Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people, and society, and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
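The similarity-search responsibility in the listing above boils down to nearest-neighbour retrieval over stored embeddings. A minimal sketch in plain Python, with toy 3-dimensional vectors and invented document ids; a real deployment would hold high-dimensional embeddings in a vector store such as Redis rather than a dict:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    """Return the k document ids most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy "embeddings"; production systems store hundreds of dimensions per document.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.0, 0.0], index))  # doc_a and doc_b are nearest to the query
```

The same ranking logic is what a vector database executes server-side, usually with an approximate index (e.g. HNSW) instead of the exhaustive scan shown here.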

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Role: Nutanix Exp: 3 to 7 yrs Location: Mumbai Mode of Interview: Virtual Skills: VMware vSphere, Nutanix AHV, Active Directory, Windows Clustering, DNS Architecture, PowerShell Scripting, File Servers, Hyper-V; MCSA, MCSE, MCTS, MCITP, or VCP certification would be a plus; Nutanix Certified Professional preferred. •Good understanding of virtualization technology •Experience on VMware ESX 5.1 and above •Experience in P2V, vMotion and VMware vCenter •Good hands-on experience with the ESX command line •Good hands-on experience in diagnostics and troubleshooting of ESX server environments •Good understanding of HA and DRS environments •Experience in performance tuning of VMware servers and virtual sessions, and management of server resources between virtual machines •Experience in backup and recovery of virtual machines and virtual servers •Basic knowledge of storage technologies and networking •Readiness to work in a 24x7 environment. Nutanix Skills: Nutanix hardware and Nutanix AHV virtualization platform support preferred. Good knowledge of server hardware and blade centers, including Nutanix hardware.

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description Managing Oracle AIA PIPs (O2C and AABC PIP), Oracle SOA Suite and OSB 12c environments; performing installation, configuration, and clustering tasks; and monitoring and troubleshooting composites, pipelines, and integrations, including integration with Oracle Enterprise Manager (OEM). The administrator will oversee WebLogic domains, JMS resources, and security configurations while conducting health checks, performance tuning, and issue resolution for middleware systems. Documenting processes and adhering to best practices is also a key part of the role. Career Level - IC2 Responsibilities Install, configure, and administer Oracle AIA solutions, including O2C and AABC PIPs, ensuring proper integration with Oracle SOA Suite and Oracle OSB environments. Manage and maintain Oracle SOA/OSB Suite, Oracle AIA 12.x, and WebLogic Server environments, ensuring high availability, performance, and scalability. Monitor, troubleshoot, and resolve issues using Oracle Enterprise Manager (OEM) and other monitoring tools to ensure optimal system performance. Identify and resolve infrastructure-related issues in BPEL, ESB, and XML/XSLT workflows to maintain seamless integration across enterprise systems. Perform regular system upgrades and patches for Oracle AIA, SOA Suite, and WebLogic environments, ensuring minimal downtime and risk mitigation. Work closely with the development team to configure and deploy Oracle AIA PIPs, facilitating the integration of enterprise applications. Provide ongoing support and troubleshooting for AIA-related integrations and workflows, ensuring timely resolution of technical challenges. Document and enforce integration best practices, including change management, version control, and deployment procedures to ensure consistency and reliability across environments. Collaborate with cross-functional teams to understand business requirements and tailor AIA solutions to meet specific integration needs.
Conduct periodic health checks and performance tuning for Oracle AIA and SOA environments to optimize system efficiency and response times. Ensure compliance with security standards, ensuring that Oracle AIA environments are secured and appropriately configured according to enterprise security policies. About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Data About Salesforce We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. Job Details We’re looking for an experienced Lead Data Scientist who will help us build predictive models and recommender systems using machine learning and statistical techniques to drive personalized marketing and customer experience. This Lead Data Scientist brings significant experience in designing, developing, and delivering statistical models and machine learning algorithms for targeting and digital optimization use cases on large-scale data sets in a cloud environment. They show rigor in how they prototype, test, and evaluate algorithm performance both in the testing phase of algorithm development and in managing production algorithms.
They demonstrate advanced knowledge of machine learning and statistical techniques along with ensuring the ethical use of data in the algorithm design process. At Salesforce, Trust is our number one value and we expect all applications of statistical and machine learning models to adhere to our values and policies to ensure we balance business needs with responsible uses of technology. Responsibilities As part of the Customer Targeting Algorithms team within the Marketing AI/ML Algorithms & Applications organization, develop machine learning algorithms and statistical models to drive effective marketing and personalized customer experience - e.g., propensity models, uplift models, next-best recommender systems, customer lifetime value, etc. Own the full lifecycle of model development from ideation and data exploration, algorithm design, validation, and testing. Work closely with data engineers to develop modeling data sets and pipelines; deploy models in production, set up model monitoring and in-production tuning processes. Be a master in cross-functional collaboration by developing deep relationships with key partners across the company and coordinating with working teams. Collaborate with stakeholders to translate business requirements into technical specifications, and present data science solutions to technical and non-technical audiences across the organization. Constantly learn and keep a clear pulse on innovation across the enterprise SaaS, AdTech, paid media, data science, customer data, and analytics communities. Assume leadership responsibilities and cover the end-to-end data science solution outside of model development. This includes driving projects to completion with minimal supervision, engaging with stakeholders to quantify impact, and planning roadmaps for future enhancements.
Work independently to manage stakeholder expectations and explore alternative use cases to get better return on investment from the suite of AI/ML models. Required Skills 8+ years of experience using advanced statistical and machine learning techniques such as clustering, linear and logistic regressions, PCA, gradient boosting machines (GBM), support vector machines (SVM), neural networks (e.g., ANN, RNN, CNN), and other deep learning algorithms (e.g., Wide & Deep). Must have multiple robust examples of using these techniques to support marketing efforts and to solve business problems on large-scale data sets. 8+ years of proficiency with one or more programming languages such as Python, R, PySpark, Java. Expert-level knowledge of SQL with strong data exploration and manipulation skills. Experience using cloud platforms such as GCP and AWS for model development and operationalization is preferred. Experience developing production-ready feature engineering scripts for model scoring deployment. Experience transforming semi-structured and unstructured data into features for model development. Experience creating model monitoring and model re-training frameworks to validate and optimize in-production performance. Must have superb quantitative reasoning and interpretation skills with strong ability to provide analysis-driven business insight and recommendations. Excellent written and verbal communication skills; ability to work well with peers and leaders across data science, marketing, and engineering organizations. Excellent presentation skills; ability to articulate data science solutions to a wide audience to drive model use and implementation adoption. Creative problem-solver who simplifies problems to their core elements. Experience with setting up endpoints, lambda functions, and API gateways is a plus. B2B customer data experience a big plus. Advanced Salesforce product knowledge is also a plus. 
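A propensity model of the kind listed in the responsibilities above can be sketched as a logistic regression trained by batch gradient descent. This is an illustrative toy, not Salesforce's implementation: the features (email opens, site visits) and labels are invented, and a production targeting model would use far richer data and tooling:

```python
import math

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_propensity(X, y, lr=0.5, epochs=2000):
    """Fit w, b for P(convert | x) = sigmoid(w.x + b) by batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of log-loss w.r.t. the logit
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Hypothetical features: [email_opens, site_visits]; label: converted (1) or not (0).
X = [[0, 0], [1, 0], [0, 1], [2, 2], [3, 2], [2, 3]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_propensity(X, y)

def score(x):
    """Propensity score for a new customer feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

The scores can then rank customers for targeting; an uplift model would instead contrast treated and control groups rather than predicting raw conversion.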
Accommodations If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

AI and Machine Learning Intern Company: INLIGHN TECH Location: Remote (100% Virtual) Duration: 3 Months Stipend for Top Interns: ₹15,000 Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The AI and Machine Learning Internship is crafted to provide practical exposure to building intelligent systems, enabling interns to bridge theoretical knowledge with real-world applications. Role Overview: As an AI and Machine Learning Intern, you will work on projects involving data preprocessing, model development, and performance evaluation. This internship will strengthen your skills in algorithm design, model optimization, and deploying AI solutions to solve real-world problems. Key Responsibilities: Collect, clean, and preprocess datasets for training machine learning models Implement machine learning algorithms for classification, regression, and clustering Develop deep learning models using frameworks like TensorFlow or PyTorch Evaluate model performance using metrics such as accuracy, precision, and recall Collaborate on AI-driven projects, such as chatbots, recommendation engines, or prediction systems Document code, methodologies, and results for reproducibility and knowledge sharing Qualifications: Pursuing or recently completed a degree in Computer Science, Data Science, Artificial Intelligence, or a related field Strong foundation in Python and understanding of libraries such as Scikit-learn, NumPy, Pandas, and Matplotlib Familiarity with machine learning concepts like supervised and unsupervised learning Experience or interest in deep learning frameworks (TensorFlow, Keras, PyTorch) Good problem-solving skills and a passion for AI innovation Eagerness to learn and contribute to real-world ML applications Internship Benefits: Hands-on experience with real-world AI and ML projects 
Certificate of Internship upon successful completion Letter of Recommendation for top performers Build a strong portfolio of AI models and machine learning solutions
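The evaluation metrics named in the internship duties above (accuracy, precision, recall) are simple counts over a confusion matrix. A self-contained sketch with made-up binary labels:

```python
def evaluate(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# tp=2, fp=1, fn=1 here, so precision = recall = 2/3 and accuracy = 3/5.
m = evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In practice Scikit-learn's `precision_score`/`recall_score` do this, but knowing the counts behind them is exactly the kind of foundation the role asks for.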

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern Company: INLIGHN TECH Location: Remote (100% Virtual) Duration: 3 Months Stipend for Top Interns: ₹15,000 Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data. Role Overview: As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact. Key Responsibilities: Collect, clean, and preprocess data from various sources Apply statistical methods and machine learning techniques to extract insights Build and evaluate predictive models for classification, regression, or clustering tasks Visualize data using libraries like Matplotlib, Seaborn, or tools like Power BI Document findings and present results to stakeholders in a clear and concise manner Collaborate with team members on data-driven projects and innovations Qualifications: Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.) 
Understanding of statistical analysis and machine learning algorithms Familiarity with SQL and data visualization tools or libraries Strong analytical, problem-solving, and critical thinking skills Eagerness to learn and apply data science techniques to solve real-world problems Internship Benefits: Hands-on experience with real datasets and end-to-end data science projects Certificate of Internship upon successful completion Letter of Recommendation for top performers Build a strong portfolio of data science projects and models
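Clustering, one of the tasks listed above, can be illustrated with a bare-bones k-means (Lloyd's algorithm) on toy 2-D points; real projects would reach for Scikit-learn's KMeans instead of hand-rolling this:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then recompute centroids as cluster means, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated toy blobs around (0, 0) and (10, 10).
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, k=2)
```

With these blobs the algorithm settles on centroids near (1/3, 1/3) and (31/3, 31/3), i.e. the means of the two groups.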

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description You are a strategic thinker passionate about driving solutions in “Data Science”. You have found the right team. As a Data Science professional within our “Asset Management team”, you will spend each day defining, refining and delivering set goals for our firm. The Asset Management Data Science team is focused on enhancing and facilitating various steps in the investment process ranging from financial analysis and portfolio management to client services and advisory. You will utilize a large collection of textual data including financial documents, analyst reports, news, meeting notes and client communications along with more typical structured datasets. You will apply the latest methodologies to generate actionable insights to be directly consumed by our business partners. About Are you excited about using data science and machine learning to make a real impact in the asset management industry? Do you enjoy working with cutting-edge technologies and collaborating with a team of dedicated professionals? If so, the Data Science team at JP Morgan Asset Management could be the perfect fit for you. Here’s why: Real-World Impact: Your work will directly contribute to improving the investment process and enhancing client experiences and operational processes, making a tangible difference in our asset management business. Collaborative Environment: Join a team that values collaboration and teamwork. You’ll work closely with business stakeholders and technologists to develop and implement effective solutions. Continuous Learning: We support your professional growth by providing opportunities to learn and experiment with the latest data science and machine learning techniques. Job Responsibilities Collaborate with internal stakeholders to identify business needs and develop NLP/ML solutions that address client needs and drive transformation.
Apply large language models (LLMs), machine learning (ML) techniques, and statistical analysis to enhance informed decision-making and improve workflow efficiency, which can be utilized across investment functions, client services, and operational processes. Collect and curate datasets for model training and evaluation. Perform experiments using different model architectures and hyperparameters, determine appropriate objective functions and evaluation metrics, and run statistical analysis of results. Monitor and improve model performance through feedback and active learning. Collaborate with technology teams to deploy and scale the developed models in production. Deliver written, visual, and oral presentations of modeling results to business and technical stakeholders. Stay up-to-date with the latest research in LLM, ML and data science. Identify and leverage emerging techniques to drive ongoing enhancement. Required Qualifications, Capabilities, And Skills Advanced degree (MS or PhD) in a quantitative or technical discipline or significant practical experience in industry. Minimum of 8 years of experience in applying NLP, LLM and ML techniques in solving high-impact business problems, such as semantic search, information extraction, question answering, summarization, personalization, classification or forecasting. Advanced Python programming skills with experience writing production-quality code Good understanding of the foundational principles and practical implementations of ML algorithms such as clustering, decision trees, gradient descent, etc. Hands-on experience with deep learning toolkits such as PyTorch, Transformers, HuggingFace. Strong knowledge of language models, prompt engineering, model fine-tuning, and domain adaptation. Familiarity with the latest developments in deep learning frameworks. Ability to communicate complex concepts and results to both technical and business audiences.
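Gradient descent, one of the ML fundamentals named in the qualifications above, is simply: repeatedly step against the gradient. A one-dimensional toy (the function and learning rate are illustrative, not from the listing):

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Minimize a differentiable function given a callable for its gradient."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move opposite the slope
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3); the minimum is at w = 3.
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

The same update rule, applied to the gradient of a loss over millions of parameters, is what optimizers like SGD in PyTorch perform at scale.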
Preferred Qualifications, Capabilities, And Skills Prior experience in an Asset Management line of business Exposure to distributed model training and deployment Familiarity with techniques for model explainability and self-validation About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We think you also hate it when a travel app gives you a headache, right? Even slight misinformation can ruin a trip. That is exactly what we are tackling as t-fam! Making sure that our 50+ million users have the best experience in crafting their own adventure. Your Main Duties In Flying With Us Design and Architect: Lead the design and architecture of MySQL, MongoDB, and PostgreSQL databases to meet scalability and performance requirements for complex applications. Advanced Troubleshooting: Utilize advanced troubleshooting skills to resolve database issues, including performance bottlenecks and data integrity problems. High Availability: Implement and manage high-availability and disaster recovery solutions for MySQL, MongoDB, and PostgreSQL, including replication, clustering, and failover strategies. Capacity Planning: Conduct in-depth capacity planning and forecasting to ensure database resources meet future growth and performance demands. Automation: Develop and maintain automation scripts and tools for database management tasks, including backups, monitoring, and provisioning. Performance Tuning: Perform comprehensive performance tuning, including query optimization, indexing strategies, and resource utilization adjustments. Security Management: Oversee database security, including access controls, encryption, and vulnerability assessments, ensuring compliance with industry standards and regulations. Documentation: Maintain detailed documentation for database configurations, procedures, and troubleshooting guides to support operational excellence. Collaboration: Collaborate with application developers, system administrators, and other stakeholders to ensure seamless integration and performance of databases. Mentorship: Provide guidance and mentorship to junior DBAs and other team members, sharing expertise and best practices for database management.
Mandatory Belongings That You Must Prepare A Bachelor's degree in Computer Science, IT, or a related field; a Master’s degree is highly desirable. 6+ years of experience working with databases, including MySQL, MongoDB, and PostgreSQL, with a proven track record in a senior or lead DBA role. Expertise in advanced SQL query writing, performance tuning, and database optimization techniques for MySQL and PostgreSQL. In-depth experience with cloud environments (Google Cloud) and database services such as Google Cloud SQL. Proficiency in scripting languages such as Bash and Python, with the ability to develop custom scripts and automation tools. Extensive experience with MySQL, MongoDB, PostgreSQL, and familiarity with other NoSQL databases like Redis or Apache Cassandra is a plus. Demonstrated leadership skills, including the ability to manage complex projects, drive initiatives, and mentor junior staff. Excellent communication skills with the ability to articulate technical concepts to non-technical stakeholders and lead cross-functional teams. Strong analytical and problem-solving abilities to address complex database challenges. A proactive approach to adopting new technologies and methodologies to improve database management practices. A reliable team player with a collaborative mindset and a strong desire to contribute to the team’s success and continuous improvement. In the event that you haven’t received any updates after 3 weeks, your data will be kept and we may contact you for another career destination. Meanwhile, discover more about tiket.com on Instagram, LinkedIn, or YouTube.
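The high-availability and failover duties above usually reduce to policy decisions over replication health. A simplified sketch with hypothetical replica metadata (the names, thresholds, and fields are invented for illustration; production setups delegate this to tools like Orchestrator or Patroni):

```python
def should_failover(primary_alive, replica_lag_seconds, max_lag=30):
    """Promote a replica only when the primary is down AND the candidate
    is close enough to current to bound data loss."""
    return (not primary_alive) and replica_lag_seconds <= max_lag

def pick_replica(replicas):
    """Among healthy replicas, promote the one with the smallest lag."""
    healthy = [r for r in replicas if r["healthy"]]
    return min(healthy, key=lambda r: r["lag"])["name"] if healthy else None

# Hypothetical replica fleet as a monitoring script might see it
# (lag in seconds, e.g. from SHOW REPLICA STATUS in MySQL).
replicas = [
    {"name": "replica-1", "healthy": True, "lag": 12},
    {"name": "replica-2", "healthy": True, "lag": 3},
    {"name": "replica-3", "healthy": False, "lag": 0},
]
```

The point of codifying the policy is that the automation behaves identically at 3 a.m. and in a drill, which is what a 24x7 DBA role ultimately needs.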

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We think you also hate it when a travel app gives you a headache, right? Even slight misinformation can ruin a trip. That is exactly what we are tackling as t-fam! Making sure that our 50+ million users have the best experience in crafting their own adventure. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes. Your Main Duties In Flying With Us Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions. Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies. Develop custom data models and algorithms to apply to data sets. Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes. Develop the company A/B testing framework and test model quality. Coordinate with different functional teams to implement models and monitor outcomes. Develop processes and tools to monitor and analyze model performance and impact on the business. Mandatory Belongings That You Must Prepare Min. 8 years of experience in data analysis and modeling A Bachelor's degree in Computer Science, Statistics, Mathematics, or another quantitative field.
Strong problem-solving skills with an emphasis on product development. Proficiency in programming (R, Python, SQL, etc.) for data acquisition, processing and analysis from large data sets. Knowledge of data science concepts (regression, classification, clustering, properties of distributions, statistical tests and proper usage, etc.) and experience with applications. Knowledge of a variety of machine learning techniques (random forest, SVM, artificial neural networks, etc.) and their real-world advantages/drawbacks. Excellent written and verbal communication skills for coordinating across teams. A drive to learn and master new technologies and techniques. In the event that you haven’t received any updates after 3 weeks, your data will be kept and we may contact you for another career destination. Meanwhile, discover more about tiket.com on Instagram, LinkedIn, or YouTube.
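The A/B testing duty in the listing above typically starts with a two-proportion z-test on conversion counts. A minimal sketch with invented traffic numbers (not from the listing):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for an A/B conversion test.
    Positive z means variant B converts better than control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 5.0% vs 6.5% conversion on 10k users per arm.
z = ab_test_z(500, 10_000, 650, 10_000)
```

Here z lands comfortably above the 1.96 cutoff for 95% confidence, so the lift would be declared significant; a real framework would also handle sample sizing, peeking, and multiple variants.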

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Senior Software Engineer We’re building the next-generation AI storage system, catering to the needs of massive-scale AI factories and meeting the unique demands of the modern GenAI era. The modernization will deliver unparalleled performance, immense value and exceptional experiences for our Customers by modernizing and scaling the stack through development of several advanced technologies spanning storage management, memory management, clustering, filesystems, distributed systems and performance for our next-gen software-defined storage platform. Most importantly, we’re modernizing with AI to accelerate our execution, streamline and standardize processes, and reimagine work and customer experiences. Join us to do the best work of your career and make a profound social impact as a Senior Software Engineer on our Software Engineering Team in Bangalore. What You’ll Achieve As a Senior Software Engineer, you will be part of the Protocols development team to innovate and deliver a next-gen high-performance NFS solution. Apart from developing protocol support, you will also focus on automation. You will collaborate and work closely with our global engineering talent and will have significant opportunities to innovate on and modernize the next-gen storage platform. You will: Design, develop and deliver protocol support for the next-gen AI storage platform. Contribute to the design and architecture of high-quality, complex systems and software/storage environments Prepare, review and evaluate software/storage specifications for products and systems Contribute to the development and implementation of test strategies for complex software products and systems/for storage products and systems Take the first step towards your dream career Every Dell Technologies team member brings something unique to the table.
Here’s what we are looking for with this role: Essential Requirements Hands-on coding experience in C/C++ and Python Experience in automation and Continuous Development / Continuous Integration processes. Understanding of concurrency and synchronization Networking and storage troubleshooting skills. Knowledge of Linux, the kernel, and VFS/NFS Strong object-oriented design, data structures and algorithms knowledge Agile-based development experience Desirable Requirements 5+ years of related experience. Bachelor’s or Master’s degree in computer science or related field Understanding of filesystem internals, distributed systems architecture, and memory management will be a plus. Who We Are We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us. Application closing date: 30th March 2025 Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here. "#NJP" Job ID: R263259
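The concurrency and synchronization requirement above is language-agnostic: the classic failure is a lost update when two threads race on a read-modify-write. A sketch in Python (the role itself is C/C++, where the same idea would use a mutex) showing the lock that makes the increments safe:

```python
import threading

class Counter:
    """A lock guards the read-modify-write so concurrent increments aren't lost."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # acquire/release around the critical section
            self.value += 1

counter = Counter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held for each update, the final count is exactly 4 * 10000.
```

Remove the lock and the unsynchronized `value += 1` can interleave between threads, silently dropping increments, which is precisely the bug class this requirement is about.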

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Note: If shortlisted, we’ll contact you via WhatsApp and email. Please monitor both and respond promptly. This role is located in Hyderabad. Candidates willing to relocate are welcome to apply.

Location: Hyderabad
Work Mode: Work From Office
Salary: ₹13,00,000 – ₹22,00,000 INR
Joining Time / Notice Period: Immediate – 30 Days

About The Client – A top-tier tech consulting firm specializing in data engineering, AI, and automation. With deep expertise in digital transformation and cloud solutions, the company helps businesses make smarter, data-driven decisions and optimize operations.

Job Purpose
Seeking an experienced and detail-oriented Data Engineer to join a growing data engineering team. This role involves building and optimizing scalable ELT pipelines using Snowflake and dbt, working on cloud data architecture, and collaborating with analysts, architects, and other engineers to deliver validated, business-ready datasets.

Key Responsibilities
Build and maintain ELT pipelines using dbt on Snowflake
Migrate and optimize SAP Data Services (SAP DS) jobs to cloud-native platforms
Design and manage layered data architectures (staging, intermediate, mart)
Apply performance tuning techniques like clustering, partitioning, and query optimization
Use orchestration tools such as dbt Cloud, Airflow, or Control-M
Develop modular SQL, write tests, and follow Git-based CI/CD workflows
Collaborate with data analysts/scientists to gather requirements and document solutions
Contribute to knowledge sharing through reusable dbt components and Agile ceremonies

Must-Have Skills
3–10 years of data engineering experience
Strong hands-on experience with Snowflake, dbt, SQL, and Azure Data Lake
Basic proficiency in Python for scripting and automation
Experience with SAP DS for legacy system integration
Understanding of data modeling (preferably dimensional/Kimball)
Familiarity with RBAC, GDPR, and data privacy best practices
Git-based version control and CI/CD exposure
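The layered data architecture the responsibilities describe (staging → mart) can be sketched in plain SQL. The example below runs against in-memory SQLite rather than Snowflake, and all table and column names are illustrative; in dbt, each view would be its own model file.

```python
import sqlite3

# Toy source table standing in for a raw ingest.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE raw_orders (order_id INT, customer TEXT, amount REAL, status TEXT);
INSERT INTO raw_orders VALUES
  (1, 'acme',  120.0, 'complete'),
  (2, 'acme',   80.0, 'cancelled'),
  (3, 'globex', 50.0, 'complete');

-- Staging layer: light cleanup/filtering only.
CREATE VIEW stg_orders AS
  SELECT order_id, customer, amount
  FROM raw_orders
  WHERE status = 'complete';

-- Mart layer: business-ready aggregate built on the staging layer.
CREATE VIEW mart_customer_revenue AS
  SELECT customer, SUM(amount) AS revenue
  FROM stg_orders
  GROUP BY customer;
""")

rows = con.execute(
    "SELECT customer, revenue FROM mart_customer_revenue ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 120.0), ('globex', 50.0)]
```

The point of the layering is that downstream marts never touch raw tables directly, so cleanup rules live in exactly one place.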

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

India

On-site

Linkedin logo

Job Type: Full-time

Description
About the role
CloudBees, the leader in CI/CD (Continuous Integration and Delivery) and the company behind Jenkins, is seeking an experienced QE Engineer. CloudBees Flow and CloudBees Core are complex products which span technologies ranging from distributed systems, clustering, databases, multi-threaded processing, complex scheduling, and much more.

In this role, the individual is expected to develop and maintain automated tests using Playwright, Selenide, and other test frameworks to ensure comprehensive test coverage, and to execute various types of testing of the CD-RO product to ensure the scalability and performance requirements of our customers are met, while collaborating in Agile environments.

Perform both automated (50%) and manual (20%) testing, as well as various other DevOps activities supporting the QA process (10%)
Load/stress testing of the web applications and backend
Create and update test scripts, reports, and other test documentation
Collaborate with product management, support, and engineering teams in order to establish priorities, understand requirements, formulate test plans, and execute them accordingly
Work with customer support teams to debug customer trouble tickets and reproduce them when necessary

Additional responsibilities include debugging test failures, tracking issues in Jira, and integrating tests into CI/CD pipelines.

The CloudBees Flow team places high value on quality, with the expectation that the QE team serves as the ultimate gatekeeper for certifying the release readiness of the software. The ideal candidate is expected to think outside the box in terms of striking the right balance between automation, test coverage (functionality as well as performance of the software), and the speed of testing.

What You’ll Do
Be part of a two-week Sprint executing payloads in lockstep with developers
Develop a germane understanding of the plugin assembly-line process, from design to delivery, and follow it
Where necessary, improve existing processes and become an advocate for delivering top-quality software
Work as an independent contributor collaborating with a team of developers and other test engineers
Collaborate with product management, support, and engineering teams in order to establish priorities

Requirements
Role Requirements
Bachelor’s degree in Computer Science, Engineering, or a related field
10+ years of experience with automation testing frameworks
2+ years of experience with performance testing tools such as JMeter or Gatling
Experience with CI/CD processes and tools such as Jenkins, GitLab, TeamCity
Experience with Java, Groovy, or Kotlin
Experience with at least one of the cloud computing services: Google Cloud Platform, Amazon Web Services, or Microsoft Azure
Experience with command-line interfaces on Linux
Experience with Kubernetes and Helm
Experience with UI & API test automation tools, particularly in the Java stack (e.g. LoadRunner, JMeter, Gatling, Selenium, Playwright, API testing)
Working experience setting up and managing databases such as Oracle, MySQL, MS SQL, MariaDB, PostgreSQL

Nice to haves
Previous experience with test management/reporting tools
Previous experience with bash scripting and other command-line interfaces on Linux, macOS, and Windows
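As a rough illustration of the load/stress-testing deliverables above (not a JMeter or Gatling script), here is a stdlib-only sketch that fires simulated concurrent requests and reports median and p95 latency, the headline numbers a load-test report typically contains. The request function is a stand-in for a real HTTP call.

```python
import concurrent.futures
import statistics
import time

def fake_request(i):
    """Stand-in for an HTTP call to the system under test."""
    start = time.perf_counter()
    time.sleep(0.001 * (1 + i % 5))   # simulated variable latency
    return time.perf_counter() - start

# Fire 50 'requests' across 10 workers and summarize the latencies.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(fake_request, range(50)))

p95 = statistics.quantiles(latencies, n=20)[-1]   # 95th percentile
print(f"median={statistics.median(latencies):.4f}s p95={p95:.4f}s")
```

A real harness would add ramp-up, think time, and assertions on response codes, but the percentile summary is the same shape.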

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Amex GBT is a place where colleagues find inspiration in travel as a force for good and – through their work – can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued.

Our team is a dynamic group of professionals who are passionate about technology and innovation. We thrive in a collaborative, inclusive environment where every voice is valued. Together, we strive to deliver world-class solutions and outstanding service to our clients, ensuring their travel experiences are seamless and enjoyable. We’re seeking a DevOps Engineer to join our team and work on a dynamic suite of platforms that service enterprise products and platforms. We are looking for someone who can adapt to recent technologies, thrive in a dynamic environment, and deliver positive outcomes for the company and our clients. If you are as passionate about technology as we are, we want to hear from you!

What You’ll Do On a Typical Day
Work in a SCRUM team
Design, develop, and test new CI/CD or DevOps pipelines for application teams
Perform administration and operations of overall Red Hat OpenShift solutions for customers, including development of solution designs, implementation plans, and documentation
Onboard applications to enterprise DevOps and container platforms
Design and develop IT automation & monitoring solutions for business applications
Assess the provided automation architecture and/or proof of concept, select the best methods for implementing that architecture, and create appropriate automated tests to validate functionality
Evaluate and implement orchestration, automation, and tooling solutions to ensure consistent processes and repetitive tasks are performed with the highest level of accuracy and reduced defects
Analyze platform usage metrics to determine if the platform needs expansion and proactively plan engineering tasks as needed
Analyze and resolve technical and application/platform problems
Collaborate and work alongside fellow engineers, designers, and other partners to develop and maintain applications/platforms
Participate in the evolution and maintenance of existing systems
Propose new functional and/or technical product improvements
Experiment with new and emerging technologies, tools, and platforms

What We’re Looking For
5+ years of experience in DevOps and container platform engineering
Bachelor's or master's degree in computer science or STEM
3 or more years of experience and good knowledge of Jenkins and GitHub Actions, with demonstrated skills in creating CI/CD pipelines
3 or more years of experience with Docker containerization and clustering (Kubernetes, Docker, Helm, OpenShift, EKS experience)
Knowledge of YAML with the ability to create Dockerfiles for different environments and resources
Administering source code (GitHub/GitLab, etc.) and artifact/package/image management (Nexus/JFrog, etc.) tools
Knowledge of security scanning & DevSecOps SAST, DAST, SCA tools (Snyk, Sonatype, GitLab, Mend, etc.)
Hands-on experience in provisioning Infrastructure as Code (IaC)
Experience with Linux, automation, and scripting (Ansible, bash, Groovy)
Interest in learning and mastering new technologies
Passion for excellence in platform engineering, DevSecOps, and building enterprise platforms
Curiosity and passion for problem solving
Proficiency in English
Inclusive, collaborative, and able to work seamlessly with a multicultural and international team

Bonus if you have
Experience in AWS
Knowledge of accessibility (WCAG)
Knowledge of the travel industry

Technical Skills You’ll Develop
Jenkins, GitHub, GitHub Actions, Nexus/JFrog, SonarQube, Veracode
Kubernetes, Red Hat OpenShift, EKS, Docker, Podman
Ansible, bash, Groovy
DevSecOps – Snyk, Sonatype, Wiz, etc.
Terraform, CloudFormation, Chef, Puppet, Python
New Relic, ELK (Elasticsearch, Logstash, Kibana), Amplitude Analytics

Location: Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family. Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals. Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first. We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, achievements, and drive company awareness and action. And much more!

All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement.

What if I don’t meet every requirement?
If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about “checking every box”; please apply anyway. You may be exactly the person we’re looking for!
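One of the listed skills, creating Dockerfiles for different environments, can be sketched as a small templating function. The base images, port, and file layout below are assumptions for illustration, not this team's actual configuration.

```python
def render_dockerfile(env: str) -> str:
    """Render a minimal Dockerfile for a given environment.
    Base images, port, and entrypoint are illustrative choices."""
    base = {"dev": "python:3.12", "prod": "python:3.12-slim"}[env]
    lines = [
        f"FROM {base}",
        "WORKDIR /app",
        "COPY . /app",
        "RUN pip install --no-cache-dir -r requirements.txt",
        f"ENV APP_ENV={env}",
        "EXPOSE 8080",
        'CMD ["python", "main.py"]',
    ]
    return "\n".join(lines)

print(render_dockerfile("prod").splitlines()[0])  # FROM python:3.12-slim
```

In practice the same parameterization is usually done with build args (ARG/--build-arg) rather than generated files, but the idea of one template per environment family is the same.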

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

CryptoChakra is a leading cryptocurrency analytics and education platform dedicated to democratizing access to digital asset markets through cutting-edge technology and user-centric learning. By harnessing machine learning algorithms, real-time blockchain intelligence, and predictive analytics, we deliver hyper-accurate price forecasts, risk assessment frameworks, and actionable market insights. Our platform bridges innovation with education, offering curated tutorials and interactive tools that empower traders, investors, and enthusiasts to navigate the crypto ecosystem with confidence. Committed to advancing financial literacy, we specialize in predictive modeling, sentiment analysis, and DeFi analytics to shape the future of decentralized finance. Position: Blockchain Data Analyst Intern (Fresher) Remote | Full-Time Internship | Compensation: Paid/Unpaid based on suitability Role Summary Join CryptoChakra’s analytics team to decode on-chain data and uncover patterns driving crypto markets. This role offers hands-on experience in blockchain analytics, predictive modeling, and data-driven storytelling, ideal for candidates eager to merge technical skills with fintech innovation. Key Responsibilities On-Chain Analysis: Investigate transactional data, wallet activity, and smart contract interactions using tools like Etherscan or Dune Analytics. Predictive Modeling: Assist in developing ML models (TensorFlow/PyTorch) to forecast market trends based on blockchain metrics. Sentiment Analysis: Scrape and analyze social/media data (Reddit, Twitter) to gauge market sentiment. Data Visualization: Create dashboards (Tableau, Power BI) to translate complex metrics into actionable insights for educational content. DeFi Research: Explore decentralized protocols (e.g., Uniswap, Aave) to assess liquidity trends and governance dynamics. Qualifications Technical Skills Foundational proficiency in Python/R for data manipulation (Pandas, NumPy). 
Basic understanding of SQL/NoSQL databases and statistical concepts (regression, clustering). Familiarity with blockchain explorers (Etherscan) or crypto APIs (CoinGecko, Binance) is a plus.

Professional Competencies
Analytical mindset to identify trends in unstructured datasets.
Strong communication skills for cross-team collaboration in remote settings.
Self-motivated with the ability to prioritize tasks and meet deadlines.

Preferred Qualifications
Academic projects involving data analysis, machine learning, or blockchain technology.
Exposure to tools like TensorFlow, AWS, or big data frameworks (Spark).
Curiosity about DeFi, NFTs, or tokenomics.

Academic Background
Pursuing or recently completed a degree in Data Science, Statistics, Computer Science, Economics, or a related field.

What We Offer
Flexible Compensation: Paid opportunities for candidates with relevant technical skills; unpaid roles for skill-building enthusiasts.
Mentorship: Guidance from senior analysts and blockchain experts.
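The clustering concept named in the qualifications can be illustrated with a tiny k-means over hypothetical wallet features. Initialization is deterministic for reproducibility; real work would use scikit-learn on actual on-chain data pulled from an explorer or API.

```python
def kmeans(points, centers, iters=10):
    """Tiny k-means for illustration only (not production code)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean).
            idx = min(range(len(centers)),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical wallet features: (tx_count, avg_tx_value_usd).
wallets = [(2, 50), (3, 60), (2, 55), (40, 900), (38, 950), (42, 870)]
centers, clusters = kmeans(wallets, centers=[wallets[0], wallets[-1]])
sizes = sorted(len(c) for c in clusters)
print(sizes)  # [3, 3]
```

With two well-separated behavior groups (retail-like vs whale-like activity), the algorithm recovers the 3/3 split after a single update.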

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Role Overview: Joining one of our top customers as a skilled AI Engineer, you will design, develop, and deploy machine learning models and systems that drive our products and enhance user experiences. You will work closely with cross-functional teams to implement cutting-edge AI solutions, including recommendation engines and large language models.

Key Responsibilities:
Design and implement robust machine learning models and algorithms, focusing on recommendation systems.
Conduct data analysis to identify trends, insights, and opportunities for model improvement.
Collaborate with data scientists and software engineers to build and integrate end-to-end machine learning systems.
Optimize and fine-tune models for performance and scalability, ensuring seamless deployment.
Work with large datasets using SQL and Postgres to support model training and evaluation.
Implement and refine prompt engineering techniques for large language models (LLMs).
Stay current with advancements in AI/ML technologies, particularly in core ML algorithms like clustering and community detection.
Monitor model performance, conduct regular evaluations, and retrain models as needed.
Document processes, model performance metrics, and technical specifications.

Required Skills and Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
Strong expertise in Python and experience with machine learning libraries (e.g., TensorFlow, PyTorch, scikit-learn).
Proven experience with SQL and Postgres for data manipulation and analysis.
Demonstrated experience building and deploying recommendation engines.
Solid understanding of core machine learning algorithms, including clustering and community detection.
Prior experience building end-to-end machine learning systems.
Familiarity with prompt engineering and working with large language models (LLMs).
Experience working with near-real-time recommendation systems.
Hands-on experience with graph databases such as Neo4j or Neptune.
Experience with the Flask or FastAPI frameworks.
Experience with SQL to write/modify/understand existing queries and optimize DB connections.
Experience with AWS services like ECS, EC2, S3, CloudWatch.

Preferred Qualifications:
Experience with graph databases (specifically Neo4j and the Cypher query language).
Knowledge of large-scale data handling and optimization techniques.
Experience improving models with RLHF.
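A minimal sketch of the recommendation-engine idea this role centers on: user-based collaborative filtering with cosine similarity over a toy ratings dictionary. Users, items, and ratings are made up; a production system would use learned embeddings, a feature store, and near-real-time serving.

```python
from math import sqrt

# Toy user -> {item: rating} interactions; all names are illustrative.
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 3, "c": 5},
    "carol": {"a": 1, "d": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (sqrt(sum(x * x for x in u.values()))
           * sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(user, k=1):
    """Score items the user hasn't seen by similarity-weighted
    neighbor ratings, then return the top k."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['d']
```

Item "d" is recommended to alice because carol, who overlaps with her on item "a", rated it highly; that neighbor-weighting step is the core of collaborative filtering.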

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra

Remote

Indeed logo

Established in 1806 as a small soap and candle business in New York City, Colgate-Palmolive is now a truly global company with products sold in over 200 countries and territories under such internationally recognized brand names as Colgate, Palmolive, Softsoap, Irish Spring, Protex, Sorriso, Kolynos, elmex, Tom's of Maine, Sanex, Ajax, Axion, Soupline, Haci Sakir, Suavitel, PCA SKIN, EltaMD, Filorga and Hello as well as Hill's Science Diet and Hill's Prescription Diet. Colgate-Palmolive is a leading consumer products company that serves hundreds of millions of consumers worldwide with brands and products across four core businesses – Oral Care, Personal Care, Home Care and Pet Nutrition. We are committed to offering products that make lives healthier and more enjoyable, and programs that enrich communities around the world. Every day millions of people trust our products to care for themselves and the ones they love. Our goal is to use our technology to create products that will continue to improve the quality of life for our consumers wherever they live. A career at Colgate-Palmolive is an excellent opportunity if you seek a global experience, constant challenge, and development opportunities in an environment that respects work/life effectiveness. Job Title: Sr. Specialist, Data Science Travel Required?: No Travel Date: May 24, 2025 Remote Relocation Assistance Offered Within Country Job Number #165399 - Mumbai, Maharashtra, India Who We Are Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. 
Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today’s dynamic analytical/technological environment, it is an exciting time to be a part of the GLOBAL ANALYTICS team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape.

What role will you play as a member of Colgate's Analytics team?
The GLOBAL DATA SCIENCE & ADVANCED ANALYTICS vertical in Colgate-Palmolive is focused on projects with big $ impact and scope for scalability, with a clear focus on addressing business questions and recommending actions. The Data Scientist position would lead GLOBAL DATA SCIENCE & ADVANCED ANALYTICS projects within the Analytics Continuum.
Conceptualizes and builds predictive modeling, simulations, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Effectiveness, Forecasting, etc., handle relationships with the business independently, and drive projects such as Price Promotion, Marketing Mix, and Forecasting.

Who are you…

You are a function expert -
Leads GLOBAL DATA SCIENCE & ADVANCED ANALYTICS within the Analytics Continuum
Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases
Applies ML and AI to analytics algorithms to build inferential and predictive models allowing for scalable solutions to be deployed across the business
Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built
Deploys models using Airflow and Docker on Google Cloud Platform

You connect the dots -
Merge multiple data sources and build statistical/machine learning models in Price and Promo Elasticity Modeling and Marketing Mix Modeling to derive actionable business insights and recommendations
Assemble large, sophisticated data sets that meet functional/non-functional business requirements
Build data and visualization tools for business analytics to assist in decision making

You are a collaborator -
Work closely with Division Analytics team leads
Work with data and analytics specialists across functions to drive data solutions

You are an innovator -
Identify, design, and implement new algorithms and process improvements while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability
Qualifications
What you’ll need
BE/BTech (Computer Science or Information Technology preferred), MBA or PGDM in Business Analytics / Data Science, additional DS certifications or courses, or MSc/MStat in Economics or Statistics
5+ years of experience in building data models and driving insights
Hands-on experience developing statistical models such as linear regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, Bayesian regression, etc.
Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
Strong understanding of cloud frameworks (Google Cloud, Snowflake) and services like Kubernetes, Cloud Build, Cloud Run
Knowledge of using GitHub and Airflow for coding, model execution, and model deployment on cloud platforms
Working knowledge of tools like Looker, Domo, Power BI, and web-app frameworks using Plotly, pydash, SQL
Experience in client-facing roles, supporting and working with multi-functional teams in a dynamic environment

What you’ll need…(Preferred)
Handling, redefining, and developing statistical models for RGM/Pricing and/or Marketing Effectiveness
Experience with third-party data, i.e., syndicated market data, point of sale, etc.
Solid understanding of the consumer packaged goods industry
Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
Experience visualizing/presenting data for partners using Looker, Domo, pydash, Plotly, d3.js, ggplot2, Streamlit, etc.
Willingness and ability to experiment with new tools and techniques
Ability to maintain personal composure and thoughtfully handle difficult situations
Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve — and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect, and have the support of leadership to impact the business in a meaningful way.

Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation. #LI-Remote
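The price/promo elasticity modeling this role mentions usually reduces to a log-log regression, where the fitted slope is the price elasticity of demand. A stdlib-only sketch on synthetic weekly data (the numbers are fabricated around an elasticity of about -1.5):

```python
import math

# Hypothetical weekly (price, units_sold) observations.
data = [(10.0, 200), (11.0, 172), (12.0, 151), (13.0, 134), (14.0, 121)]

# OLS slope of ln(units) on ln(price) = price elasticity of demand.
xs = [math.log(p) for p, _ in data]
ys = [math.log(q) for _, q in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(round(slope, 2))  # ≈ -1.5: a 1% price increase costs ~1.5% of volume
```

Real RGM work adds promo flags, seasonality, and regularization on top of this core regression, but the interpretation of the log-log coefficient is the same.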

Posted 3 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

hackajob is collaborating with Wipro to connect them with exceptional tech professionals for this role.

Title: Data Science Lead
Requisition ID: 55339
City: Chennai
Country/Region: IN

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Role Purpose
The purpose of the role is to create exceptional architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.

Mandatory Skills: Data Science, ML, DL, Python for Data Science, TensorFlow, PyTorch, Django, SQL, MLOps
Preferred Skills: NLP, Gen AI, LLM, Power BI, Advanced Analytics, Banking exposure

Strong understanding of data science, machine learning, and deep learning principles and algorithms
Proficiency in programming languages and frameworks such as Python, TensorFlow, and PyTorch
Experienced data scientist who can use Python to build various AI models for banking product acquisition, deepening, and retention
Drive data-driven personalisation and customer segmentation in accordance with the bank's data privacy and security standards
Expert in applying ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques
Optimize model performance and scalability for real-time inference and deployment
Experiment with different hyperparameters and model configurations to improve AI model quality
Ensure AI/ML solutions are developed, and validations are performed, in accordance with Responsible AI guidelines and standards
Working knowledge and experience in MLOps is a must; an engineering background is preferred
Excellent command of data warehousing concepts and SQL
Knowledge of personal banking products is a plus

Mandatory Skills: AI Cognitive
Experience: 8-10 Years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
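The customer-segmentation work described above can be caricatured with a rule-based RFM (recency, frequency, monetary) sketch. The thresholds, fields, and segment names are illustrative; a real banking system would learn segments, e.g. via k-means on scaled features, under the bank's data-privacy controls.

```python
# Hypothetical customers: (customer_id, recency_days, frequency, monetary).
customers = [
    ("c1", 5, 20, 5000),
    ("c2", 40, 5, 800),
    ("c3", 90, 1, 100),
    ("c4", 10, 15, 3000),
]

def segment(recency, frequency, monetary):
    """Toy rule-based RFM segmentation with made-up thresholds."""
    if recency <= 14 and frequency >= 10 and monetary >= 1000:
        return "active-high-value"
    if recency <= 60:
        return "at-risk"
    return "dormant"

segments = {cid: segment(r, f, m) for cid, r, f, m in customers}
print(segments["c1"], segments["c3"])  # active-high-value dormant
```

The value of even this crude split is that each segment gets a different action (deepening offers for active-high-value, win-back campaigns for at-risk), which is what ties segmentation to product acquisition and retention.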

Posted 3 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

hackajob is collaborating with Wipro to connect them with exceptional tech professionals for this role.

Title: AI/ML Engineer - 5 to 10 yrs
Requisition ID: 57885
City: Chennai
Country/Region: IN

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Role Purpose
The purpose of the role is to create exceptional architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.

Mandatory Skills: Data Science, ML, DL, NLP or Computer Vision, Python, TensorFlow, PyTorch, Django, PostgreSQL
Preferred Skills: Gen AI, LLM, RAG, LangChain, Vector DB, Azure Cloud, MLOps, Banking exposure

Competency Building and Branding
Ensure completion of necessary trainings and certifications
Develop Proofs of Concept (POCs), case studies, demos, etc. for new growth areas based on market and customer research
Develop and present a point of view of Wipro on solution design and architecture by writing white papers, blogs, etc.
Attain market referenceability and recognition through highest analyst rankings, client testimonials, and partner credits
Be the voice of Wipro’s thought leadership by speaking in forums (internal and external)
Mentor developers, designers, and junior architects in the project for their further career development and enhancement
Contribute to the architecture practice by conducting selection interviews, etc.

Mandatory
Strong understanding of data science, machine learning, and deep learning principles and algorithms
Proficiency in programming languages and frameworks such as Python, TensorFlow, and PyTorch
Ability to work with large datasets and knowledge of data preprocessing techniques
Strong backend Python developer
Experience applying machine learning techniques, Natural Language Processing, or Computer Vision using TensorFlow or PyTorch
Build and deploy end-to-end ML models and leverage metrics to support predictions, recommendations, search, and growth strategies
Expert in applying ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques
Optimize model performance and scalability for real-time inference and deployment
Experiment with different hyperparameters and model configurations to improve AI model quality
Ensure AI/ML solutions are developed, and validations are performed, in accordance with Responsible AI guidelines.
Team Management

Resourcing
Anticipate new talent requirements as per market/industry trends or client requirements
Hire adequate and right resources for the team

Talent Management
Ensure adequate onboarding and training for team members to enhance capability and effectiveness
Build an internal talent pool and ensure their career progression within the organization
Manage team attrition
Drive diversity in leadership positions

Performance Management
Set goals for the team, conduct timely performance reviews, and provide constructive feedback to direct reports
Ensure that Performance Nxt is followed for the entire team

Employee Satisfaction and Engagement
Lead and drive engagement initiatives for the team
Track team satisfaction scores and identify initiatives to build engagement within the team

Mandatory Skills: Generative AI
Experience: 8-10 Years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 3 weeks ago

Apply

4.0 - 5.0 years

15 - 25 Lacs

Bengaluru

Work from Office


Job Summary

We are seeking a highly skilled Technical Lead with 4 to 8 years of experience in Oracle and Oracle Database Administration. The ideal candidate will oversee database management, ensure optimal performance, and provide technical guidance to the team. This hybrid role requires rotational shifts and offers an opportunity to make a significant impact on our company's success.

Responsibilities
Lead the database management team to ensure optimal performance and reliability.
Oversee the installation, configuration, and maintenance of Oracle databases.
Provide technical guidance and support to team members on Oracle Database Administration.
Monitor database performance and implement improvements as needed.
Develop and maintain database backup and recovery procedures.
Ensure data security and compliance with company policies and regulations.
Collaborate with other teams to integrate database solutions with applications.
Troubleshoot and resolve database issues in a timely manner.
Create and maintain documentation for database configurations and procedures.
Conduct regular database audits to ensure data integrity and accuracy.
Implement and manage database replication and clustering.
Stay updated with the latest Oracle technologies and best practices.
Participate in rotational shifts to provide 24/7 database support.

Qualifications
Possess a strong background in Oracle and Oracle Database Administration.
Demonstrate excellent problem-solving and troubleshooting skills.
Have experience with database backup and recovery procedures.
Show proficiency in database performance tuning and optimization.
Exhibit knowledge of data security and compliance standards.
Have the ability to work collaboratively with cross-functional teams.
Display strong communication and documentation skills.
Be adaptable to rotational shifts and a hybrid work model.
Stay current with emerging Oracle technologies and industry trends.
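Responsibilities like "develop and maintain database backup and recovery procedures" are typically automated with monitoring scripts. As a hedged sketch only (the log snippets below are illustrative, not verbatim Oracle RMAN output, and `backup_log_ok` is a hypothetical helper), a check that scans a backup log for RMAN-/ORA- error codes might look like:

```python
import re

# Oracle tools report failures with RMAN-NNNNN / ORA-NNNNN error codes.
ERROR_PATTERN = re.compile(r"\b(RMAN-\d{5}|ORA-\d{5})\b")

def backup_log_ok(log_text):
    """Return (ok, errors): ok is True when no error codes appear
    and the log records that the backup finished."""
    errors = ERROR_PATTERN.findall(log_text)
    finished = "Finished backup" in log_text
    return (finished and not errors), errors

# Illustrative log snippets (not real RMAN output).
good_log = "Starting backup at 01-JAN-25\nFinished backup at 01-JAN-25\n"
bad_log = ("Starting backup at 01-JAN-25\n"
           "RMAN-06059: expected archived log not found\n")

ok_good, _ = backup_log_ok(good_log)
ok_bad, errs_bad = backup_log_ok(bad_log)
```

A script like this would typically run after each scheduled backup and raise an alert (email, pager) when the check fails, rather than relying on manual log review.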

Posted 3 weeks ago

Apply

Featured Companies