5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Associate AI/ML Scientist – Global Data Analytics, Technology (Maersk)

This position will be based in India – Bangalore/Pune.

A.P. Moller – Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too.

The Brief
In this role as an AI/ML Scientist on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. You should be able to design, develop, and implement machine learning models, conduct deep data analysis, and support decision-making with data-driven insights. Responsibilities include building and validating predictive models, supporting experiment design, and integrating advanced techniques like transformers, GANs, and reinforcement learning into scalable production systems. The role requires solving complex problems using NLP, deep learning, optimization, and computer vision. You should be comfortable working independently, writing reliable code with automated tests, and contributing to debugging and refinement. You'll also document your methods and results clearly and collaborate with cross-functional teams to deliver high-impact AI/ML solutions that align with business objectives and user needs.

What I'll be doing – your accountabilities
- Design, develop, and implement machine learning models, conduct in-depth data analysis, and support decision-making with data-driven insights
- Develop predictive models and validate their effectiveness (see the sketch after this posting)
- Support the design of experiments to validate and compare multiple machine learning approaches
- Research and implement cutting-edge techniques (e.g., transformers, GANs, reinforcement learning) and integrate models into production systems, ensuring scalability and reliability
- Apply creative problem-solving techniques to design innovative models, develop algorithms, or optimize workflows for data-driven tasks
- Independently apply data-driven solutions to ambiguous problems, leveraging tools like Natural Language Processing, deep learning frameworks, machine learning, optimization methods, and computer vision frameworks
- Understand the technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects
- Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards
- Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration
- Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance

Foundational Skills
- Mastery of Data Analysis and Data Science concepts, demonstrable in complex scenarios
- AI & Machine Learning, Programming, and Statistical Analysis skills beyond the fundamentals, demonstrable in most situations without guidance

Specialized Skills
Understood beyond the fundamentals and demonstrable in most situations without guidance:
- Data Validation and Testing
- Model Deployment
- Machine Learning Pipelines
- Deep Learning
- Natural Language Processing (NLP)
- Optimization & Scientific Computing
- Decision Modelling and Risk Analysis
Fundamentals understood and demonstrable in common scenarios with guidance:
- Technical Documentation

Qualifications & Requirements
- Bachelor's degree in B.E./B.Tech, preferably in computer science
- Experience with a collaborative development workflow: IDE (Integrated Development Environment), version control (GitHub), CI/CD (e.g., automated tests in GitHub Actions)
- Ability to communicate effectively with technical and non-technical audiences, with experience in stakeholder management
- Structured, highly analytical mindset and excellent problem-solving skills
- Self-starter, highly motivated, and willing to share knowledge and work as a team
- An individual who respects the opinions of others, yet can drive a decision through the team

Preferred Experiences
- 5+ years of relevant experience in the field of Data Engineering
- 3+ years of hands-on experience with Apache Spark, Python, and SQL
- Experience working with large datasets and big data technologies to train and evaluate machine learning models
- Experience with containerization: Kubernetes & Docker
- Expertise in building cloud-native applications and data pipelines (Azure, Databricks, AWS, GCP)
- Experience with common dashboarding and API technologies (PowerBI, Superset, Flask, FastAPI, etc.)

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs.
We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
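As an illustration of the "develop predictive models and validate their effectiveness" accountability referenced above, here is a minimal sketch using scikit-learn. The synthetic dataset, model choice, and AUC metric are assumptions made for the example, not Maersk's actual stack.

```python
# Minimal sketch: train a predictive model, then validate it on held-out data.
# Dataset, model, and metric are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Validate effectiveness on data the model has never seen.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```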
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Full Stack Developer

Job Description
We are seeking a highly skilled and motivated Full Stack Developer to join our dynamic team. The ideal candidate will have a strong background in both front-end and back-end development, with expertise in the following areas:

Key Responsibilities
- Develop and maintain web applications using the Angular framework (HTML, CSS, etc.) and JavaScript.
- Implement object-oriented programming principles in both front-end and back-end development.
- Design and develop Python-based backend services, including working with JSON objects and Python Flask servers (a minimal Flask sketch follows this posting).
- Ensure secure and efficient communication using HTTP and HTTPS protocols, including handling HTTPS requests and responses.
- Implement and manage self-signed certificates for secure web pages.
- Design and develop responsive user interfaces for mobile, tablet, and desktop devices, ensuring screen auto-resizing and an optimal user experience.
- Create and manage packages for Angular projects.
- Develop and integrate REST APIs.
- Configure and manage Nginx servers for web application deployment.
- Collaborate with UI/UX designers to translate Figma designs into functional web pages.
- Utilize tools like Swagger for API documentation and testing.
- Optimize web pages for improved screen responsiveness and performance.
- Conduct automation testing of web pages using frameworks such as Robot Framework or another framework.
- Implement webpage tokenization, security, and encryption techniques.
- Utilize browser-based developer tools for debugging and optimizing web applications on Android and Safari.
- Work with various servers, including Apache, IIS, and others.
- Integrate multi-language support into web applications, including languages such as Chinese and Spanish.
- Ability to work on Windows- and Linux-based operating systems.
- Knowledge of RDBMS or other databases.

Qualifications
- Proven experience as a Full Stack Developer or similar role.
- Proficiency in the Angular framework, JavaScript, and object-oriented programming.
- Strong knowledge of Python backend development and the Flask server.
- Experience with HTTP/HTTPS protocols and secure communication.
- Familiarity with self-signed certificates and their implementation.
- Expertise in responsive UI design for various devices.
- Experience with RESTful API development and integration.
- Knowledge of Nginx server configuration and management.
- Understanding of Figma and the ability to translate designs into code.
- Familiarity with Swagger or similar API documentation tools.
- Experience with automation testing frameworks.
- Strong understanding of web security, tokenization, and encryption.
- Proficiency with browser-based developer tools.
- Experience with different server technologies (Apache, IIS, etc.).
- Ability to integrate multi-language support into web applications.

Preferred Skills
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Ability to work in a fast-paced and dynamic environment.
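To make the Flask, JSON, and self-signed-certificate items above concrete, here is a minimal sketch. The route, port, and payload shape are hypothetical, and ssl_context="adhoc" (which generates a throwaway self-signed certificate) requires the pyOpenSSL package.

```python
# Minimal sketch of a Flask JSON service served over HTTPS for local testing.
# Route, port, and payload are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/echo", methods=["POST"])
def echo():
    payload = request.get_json(force=True)  # parse the incoming JSON object
    return jsonify({"received": payload})

if __name__ == "__main__":
    # "adhoc" creates a throwaway self-signed certificate (needs pyOpenSSL);
    # production deployments would sit behind Nginx with a real certificate.
    app.run(host="0.0.0.0", port=8443, ssl_context="adhoc")
```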
Posted 1 week ago
1.0 - 3.0 years
2 - 6 Lacs
Mumbai
Work from Office
About this role

Are you interested in building innovative technology that crafts the financial markets? Do you like working at the speed of a startup, and solving some of the world's most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance?

At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $11 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next generation technology and solutions.

What are Aladdin and Aladdin Engineering?
You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users every day worldwide!

Job Scope
- Collaborate with team members in a multi-office, multi-country environment.
- Deliver high efficiency, high availability, concurrent and fault tolerant software systems.
- Develop innovative solutions to complex problems, identifying issues and roadblocks.
- Apply validated quality software engineering practices through all phases of development.
- Contribute to quality code reviews; unit, regression and user acceptance testing; DevOps; and level one production support.

Skills and Experience
- 2-5 years of proven experience in Java development is preferred
- A proven foundation in core Java and related technologies, with OO skills and design patterns
- Good hands-on object-oriented programming knowledge in Java, Spring, and microservices
- Good knowledge of the open-source technology stack (Spring, Hibernate, Maven, JUnit, etc.)
- Experience with relational databases and/or NoSQL databases (e.g., Apache Cassandra)
- Great analytical, problem-solving and communication skills

Nice to have and opportunities to learn:
- Exposure to building microservices and APIs, ideally with REST, Kafka or gRPC
- Experience working in an agile development team or on open-source development projects
- Experience with optimization, algorithms or related quantitative processes
- Exposure to high-scale distributed technology like Kafka, Mongo, Ignite, Redis
- Experience with cloud platforms like Microsoft Azure, AWS, Google Cloud
- Experience with DevOps and tools like Azure DevOps
- Experience with AI-related projects/products or experience working in an AI research environment
- A degree, certifications or open-source track record that shows you have a mastery of software engineering principles

Qualifications
B.E./B.Tech./MCA or any other relevant engineering degree from a reputed university.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Position: Tech Lead (Java)
Job Location: Delhi
Working days: 5.5
Experience: 5+ years

Key Responsibilities
· Develop and maintain backend services using Core Java and Spring Boot
· Design and implement RESTful APIs and microservices architecture
· Integrate server-side logic with frontend components
· Optimize application performance, scalability, and reliability
· Conduct unit testing and participate in code reviews
· Troubleshoot and resolve technical issues
· Collaborate with DevOps for CI/CD pipeline integration
· Maintain documentation for code, APIs, and system architecture
· Proven track record of leading development teams and mentoring junior developers
· Experience in Agile/Scrum environments and sprint planning
· Ability to translate business requirements into technical solutions

Technical Skills
· Strong proficiency in Core Java and Spring Boot
· Experience with RESTful APIs, JSON, and HTTP protocols
· Familiarity with SQL databases (e.g., MySQL, PostgreSQL, Oracle)
· Knowledge of version control systems like Git
· Understanding of OOP principles and design patterns
· Exposure to microservices, Docker, or Kubernetes is a plus
· Excellent problem-solving and debugging skills

Tools
· JBoss, Apache Tomcat
· Java, Spring Framework (good to have)

Type of Project
· Experience in Government project development is good to have
· Experience in customization development through Java is good to have
· Experience in custom portal development is required
Posted 1 week ago
0.0 - 6.0 years
10 - 11 Lacs
Chennai
Work from Office
The Cloud Engineer will be a part of the Engineering team and will require strong knowledge of application monitoring, infrastructure monitoring, automation, maintenance, and service reliability improvements. Specifically, we are searching for someone who brings fresh ideas, demonstrates a unique and informed viewpoint, and enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences at every interaction.

- Proven work experience in designing, deploying and operating mid to large scale public cloud environments.
- Proven work experience in provisioning Infrastructure as Code (IaC) using Terraform Enterprise or Community edition.
- Proven work experience in writing custom Terraform providers/plug-ins with Sentinel policy as code.
- Proven work experience in containerisation via Docker.
- Good to have: strong working experience in container orchestration via Kubernetes (image building, k8s scheduling).
- Experience in package, config and deployment management via Helm, Kustomize, ArgoCD.
- Strong knowledge of GitHub and DevOps (Tekton / GCP Cloud Build is an advantage).
- Proficiency in scripting and coding, including traditional languages like Java, Python, Go, JavaScript and Node.js.
- Proven working experience in messaging middleware: Apache Kafka, RabbitMQ, Apache ActiveMQ.
- Proven working experience in API gateways; Apigee is an advantage.
- Proven working experience in API development, REST.
- Proven working experience in security and IAM: SSL/TLS, OAuth and JWT.
- Extensive knowledge and hands-on experience with Grafana and Prometheus client libraries (an instrumentation sketch follows this posting).
- Experience in self-hosted private/public cloud setup.
- Exposure to cloud monitoring and logging.
- Experience with distributed storage technologies like NFS, HDFS, Ceph, S3, as well as dynamic resource management frameworks (Mesos, Kubernetes, Yarn).
- Experience with automation tools should be a priority.
- Professional certification is an advantage.
- Public cloud experience, ideally GCP, is good to have.

Preferred Qualifications
- Previous success in Cloud Engineering and DevSecOps.
- Must have 5+ years of experience in DevSecOps.
- Must have a minimum of 3+ years of experience in Cloud Engineering.
- Design, automate and manage a highly available and scalable cloud deployment that allows development teams to deploy and run their services.
- Collaborate with engineering and architect teams to evaluate and identify optimal cloud solutions, leveraging scalability, high performance and security.
- Modernise existing on-prem solutions and improve existing systems.
- Extensively automate deployments and manage applications in GCP.
- Develop and maintain cloud solutions in accordance with best practices.
- Ensure efficient functioning of data storage and processing functions in accordance with company security policies and best practices in cloud security.
- Collaborate with engineering teams to identify optimization strategies and help develop self-healing capabilities.
- Design and architect middleware solutions that align with the overall system architecture and meet business requirements. This involves selecting the appropriate middleware technologies and patterns for seamless integration.
- Write code and configure middleware components to enable communication and data flow between various systems. This includes developing APIs, message queues, and other middleware services.
- Integrate different applications and services using middleware technologies, ensuring they can communicate effectively and exchange data in a standardized manner.
- Identify and resolve issues related to middleware, such as communication failures, performance bottlenecks, or data inconsistencies.
- Experience in developing strong observability capabilities.
- Identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues.
- Regularly review existing systems and make recommendations for improvements.
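To make the Grafana/Prometheus instrumentation item above concrete, here is a minimal sketch using the official prometheus_client Python library. The metric names, port, and simulated workload are illustrative assumptions.

```python
# Minimal sketch: expose custom application metrics for Prometheus to scrape
# (and Grafana to chart). Metric names and port are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

def handle_request():
    REQUESTS.inc()
    with LATENCY.time():                       # records duration into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        handle_request()
```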
Posted 1 week ago
11.0 - 19.0 years
40 - 45 Lacs
Pune
Work from Office
Join us as a Technical Delivery Lead at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As a part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that will involve detailed analytical skills and analysis. This will be done in conjunction with fellow engineers, business analysts and business stakeholders.

To be successful as a Technical Delivery Lead you should have experience with:
- Scala (core), Spark, Hive, SQL, Snowflake, AWS.
- Tools like Apache Airflow, GitLab, SBT, Maven.
- Cloud adoption and microservice architecture (mandatory).
- Familiarity with Risk, Finance, and Treasury systems is highly desirable.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities
- Build and maintain data architectures and pipelines that enable the transfer and processing of durable, complete and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Assistant Vice President Expectations
To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. For an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda.
Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple internal and external sources of information (such as procedures and practices in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information; complex information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 week ago
5.0 - 9.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Primary Skill Set: Data Engineering, Python, PySpark, Cloud (AWS/GCP), Scala.
Primary Skills: Snowflake, Cloud (AWS, GCP), Scala, Python, Spark, Big Data and SQL.

Qualification: Bachelor's or Master's degree

Job Responsibilities:
- Strong development experience in Snowflake, Cloud (AWS, GCP), Scala, Python, Spark, Big Data and SQL (a PySpark sketch follows this posting).
- Work closely with stakeholders, including product managers and designers, to align technical solutions with business goals.
- Maintain code quality through reviews and make architectural decisions that impact scalability and performance.
- Perform root cause analysis for any critical defects; address technical challenges, optimize workflows, and resolve issues efficiently.
- Expertise in Agile and Waterfall program/project implementation.
- Manage strategic and tactical relationships with program stakeholders.
- Successfully execute projects within strict deadlines while managing intense pressure.
- Good understanding of the SDLC (Software Development Life Cycle).
- Identify potential technical risks and implement mitigation strategies.
- Excellent verbal, written, and interpersonal communication abilities, coupled with strong problem-solving, facilitation, and analytical skills.
- Cloud management activities – a good understanding of cloud architecture/containerization and application management on AWS and Kubernetes.
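As referenced above, here is a minimal PySpark sketch of the kind of batch transform-and-load work this role describes. The storage paths, column names, and aggregation are hypothetical assumptions made for the example.

```python
# Minimal PySpark sketch: read raw data, aggregate, and write a curated table.
# Paths and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

daily = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders_daily/"
)
```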
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for an experienced Search Developer skilled in Java and Apache SOLR to design, develop, and maintain high-performance, scalable search solutions for enterprise or consumer-facing applications. The ideal candidate will work closely with cross-functional teams to optimize search relevance, speed, and reliability while handling large, complex datasets.

Key Responsibilities
- Design, implement, and optimize search applications and services using Java and Apache SOLR.
- Develop and maintain SOLR schemas, configurations, indexing pipelines, and query optimization for datasets often exceeding 100 million documents.
- Build and enhance scalable RESTful APIs and microservices around search functionalities.
- Work with business analysts and stakeholders to gather search requirements and improve user experience through advanced search features such as faceting, filtering, and relevance tuning (a query sketch follows this posting).
- Perform SOLR cluster management, including sharding, replication, scaling, and backup/recovery operations.
- Monitor application performance, troubleshoot issues, and implement fixes to ensure system stability and responsiveness.
- Integrate SOLR with relational and NoSQL databases, streaming platforms, and ETL processes.
- Participate in code reviews, adopt CI/CD processes, and contribute to architectural decisions.
- Stay updated on the latest developments in SOLR, Java frameworks, and search technologies.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- 7+ years of hands-on experience in Java development, including frameworks like Spring and Hibernate.
- 3+ years of solid experience working with Apache SOLR, including SOLRCloud, schema design, indexing, query parsing, and search tuning.
- Strong knowledge of search technologies (Lucene, Solr) and experience managing large-scale search infrastructures.
- Experience in RESTful API design and microservices architecture.
- Familiarity with SQL and NoSQL databases.
- Ability to write efficient, multi-threaded, and distributed system code.
- Strong problem-solving skills and debugging expertise.
- Experience with version control (Git), build tools (Maven/Gradle), and CI/CD pipelines (Jenkins, GitHub Actions).
- Understanding of Agile/Scrum software development methodologies.
- Excellent communication skills and ability to collaborate with cross-functional teams.

Preferred Skills
- Experience with other search platforms like Elasticsearch is a plus.
- Knowledge of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- Familiarity with streaming platforms such as Kafka.
- Exposure to analytics and machine learning for search relevance enhancement.
- Prior experience in large-scale consumer web or e-commerce search applications.

We Offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office

About Us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation.
A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
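To illustrate the faceting and filtering features named above, here is a minimal sketch that queries SOLR's standard HTTP select handler from Python. The host, core name, and field names are illustrative assumptions.

```python
# Minimal sketch: query a SOLR core with a main query, a filter query, and
# a facet field via the standard /select handler. Core and fields are
# illustrative assumptions.
import requests

SOLR_SELECT = "http://localhost:8983/solr/products/select"  # hypothetical core

params = {
    "q": "title:laptop",       # main relevance query
    "fq": "in_stock:true",     # filter query, cached independently of q
    "facet": "true",
    "facet.field": "brand",    # facet counts for the UI's filter sidebar
    "rows": 10,
    "wt": "json",
}

resp = requests.get(SOLR_SELECT, params=params, timeout=5)
resp.raise_for_status()
body = resp.json()
print(body["response"]["numFound"], "matches")
for doc in body["response"]["docs"]:
    print(doc.get("title"))
```

Keeping stable constraints in `fq` rather than `q` lets SOLR cache and reuse the filter across queries, which matters at the 100-million-document scale the posting mentions.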
Posted 1 week ago
2.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
We are looking for a Full Stack Developer to produce scalable software solutions for our company. As a Full Stack Developer, you should be comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries. You should also be a team player with a knack for visual design and utility.

Responsibilities
- Work with the IT team to ideate software solutions, majorly on CRM
- Design client-side and server-side architecture
- Build the front-end of applications through appealing visual design
- Develop and manage well-functioning databases and applications
- Test software to ensure responsiveness and efficiency
- Troubleshoot, debug and upgrade software
- Create security and data protection settings
- Configure technical APIs
- Write technical documentation

Requirements
- Knowledge of hosting and dedicated server hosting (GoDaddy & Google Cloud applications)
- Proven experience as a Full Stack Developer or similar role
- Experience developing desktop applications (mobile applications is an added advantage)
- Familiarity with common stacks
- Knowledge of multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery)
- Hands-on experience in PHP
- Knowledge of multiple back-end languages and JavaScript frameworks (e.g., React JS)
- Familiarity with databases (e.g., MySQL, SQL), web servers (e.g. Apache) and UI/UX design
- Excellent communication and teamwork skills
- Great attention to detail
- Organizational skills
- An analytical mind
Posted 1 week ago
0.0 - 2.0 years
10 - 14 Lacs
Bengaluru
Work from Office
We are looking for a Full Stack Developer to produce scalable software solutions. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. As a Full Stack Developer, you should be comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries. You should also be a team player with a knack for visual design and utility. You should be familiar with Agile methodologies.

Responsibilities
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Build the front-end of applications through appealing visual design
- Develop and manage well-functioning databases and applications
- Write effective APIs
- Test software to ensure responsiveness and efficiency
- Troubleshoot, debug and upgrade software
- Create security and data protection settings
- Build features and applications with a mobile responsive design
- Write technical documentation
- Work with data scientists and analysts to improve software

Requirements and Skills
- Proven experience as a Full Stack Developer or similar role
- Experience developing desktop and mobile applications
- Familiarity with common stacks
- Knowledge of multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery)
- Knowledge of multiple back-end languages (e.g. C#, Java, Python) and JavaScript frameworks (e.g. Angular, React, Node.js)
- Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache) and UI/UX design
- Excellent communication and teamwork skills
- Great attention to detail
- Organizational skills
- An analytical mind
- Degree in Computer Science, Statistics or relevant field
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a seasoned developer who is passionate about writing clean and efficient code, building scalable systems, and driving engineering excellence in a fast-paced, Agile environment. This role is ideal for developers with deep hands-on experience in Java and Apache Spark, combined with a strong foundation in object-oriented design principles.

Responsibilities:
- Perform detailed impact analysis for code changes, with an understanding of dependencies across the application components
- Design and develop scalable, high-performance code using Java and Big Data / Java Spark
- Write high-quality, maintainable code that is modular, testable, and adheres to SOLID principles and industry-standard design patterns
- Write robust unit tests using JUnit, with a focus on code coverage, business logic, readability and reusability
- Perform code reviews to ensure the code follows clean design/architecture and best engineering practices
- Operate in an environment of ownership and accountability, where quality and collaboration are core values
- Mentor junior developers and guide them through technical challenges
- Work in a cross-functional Agile team, participating in daily stand-ups, sprint planning, retrospectives, and backlog grooming
- Translate user stories into technical tasks and drive timely, high-quality delivery of solutions
- Collaborate closely with senior developers, architects, quality engineers, DevOps, and product owners to deliver high-quality code at speed

Qualifications:
- 8+ years of development experience with hands-on experience in Java, Big Data / Spark, and object-oriented programming (OOP)
- Experience with REST APIs, RDBMS databases, and Kafka messaging systems
- Exposure to microservices architecture and containerization tools (Docker, Kubernetes)
- Proven experience leading teams and mentoring developers in a fast-paced development environment
- Strong understanding of the software development lifecycle (SDLC) and Agile methodologies
- Excellent communication skills and ability to work effectively in cross-functional teams

Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Roles and Responsibility
The Senior Tech Lead – Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities
- Lead the design and implementation of Databricks-based data solutions (a minimal pipeline sketch follows this posting).
- Architect and optimize data pipelines for batch and streaming data.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and deliverables.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in Databricks environments.
- Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities
- Experience in data engineering using Databricks or Apache Spark-based platforms.
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations.
- Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
- Design and implement Azure Key Vault and scoped credentials.
- Knowledge of Git for source control, CI/CD integration for Databricks workflows, cost optimization, and performance tuning.
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
- Ability to define best practices, support multiple projects, and sometimes mentor junior engineers is a plus.
- Must have experience working with streaming data sources and Kafka (preferred).

Eligibility Criteria
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL
- Databricks certification (e.g., Certified Data Engineer Professional)
- Experience with machine learning and AI integration in Databricks
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Proven leadership experience in managing technical teams
- Excellent problem-solving and communication skills

Our Offering
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance: integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.

Let's grow together.
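As referenced above, here is a minimal sketch of a batch Delta Lake step in a medallion-style layout (bronze raw data promoted to a cleaned silver table). The mount paths and column names are hypothetical; on Databricks the SparkSession is already provided as `spark`, and plain PySpark would need the delta-spark package configured.

```python
# Minimal sketch: promote a bronze (raw) Delta table to silver (cleaned).
# Paths and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")  # hypothetical path

silver = (
    bronze
    .dropDuplicates(["event_id"])                 # basic data-quality step
    .withColumn("event_date", F.to_date("event_ts"))
)

# Medallion-style promotion: bronze (raw) -> silver (cleaned).
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")
```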
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Description:
Citi is embarking on a multi-year technology initiative in the Wealth Tech Banking & Payment Technology space. In this journey, we are looking for a highly motivated, hands-on senior developer. We are building a platform which supports various messaging, API, and workflow components for banking and payment services across the bank. The solution will be built from scratch using the latest technologies. The candidate will be a core member of the technology team responsible for implementing projects based on Java, Spring Boot, and Kafka. This is an excellent opportunity to immerse in and learn within the wealth tech banking division and gain exposure to business and technology initiatives targeted to maintain the lead position among competitors. We work in a Hybrid-Agile environment.

The Apps Development Senior Programmer Analyst is responsible for leading the team in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team.

Responsibilities:
- People Manager – Write good quality code in Java and Spring Boot (related stack); be well versed with JUnit, Mockito, integration tests and performance tests
- Individual Contributor – Write good quality code in Java and Angular 16; be well versed with UI/UX designs and unit testing using Jest
- Ability to design and develop components with minimal assistance
- Ability to effectively interact and collaborate with the development team
- Ability to effectively communicate development progress to the Project Lead
- Work with developers onshore, offshore and matrix teams to implement a business solution
- Write user/support documentation
- Evaluate and adopt new dev tools, libraries, and approaches to improve delivery quality
- Perform peer code review of project codebase changes
- Act as SME to senior stakeholders and/or other team members
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code
- Consult with users, clients, and other technology groups on issues, and recommend programming solutions; install and support customer exposure systems
- Apply fundamental knowledge of programming languages for design specifications
- Analyze applications to identify vulnerabilities and security issues, as well as conduct testing and debugging
- Serve as advisor or coach to new or lower-level analysts
- Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents
- Ability to operate with a limited level of direct supervision; can exercise independence of judgement and autonomy
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Skills Required:
- Deep knowledge of Spring, including Spring Framework, Spring Boot, Spring Security, Spring Web, and Spring Data
- Deep knowledge of threading, collections, exception handling, JDBC, Java OOD/OOP concepts, GoF design patterns, MoM and SOA design patterns, file I/O, parsing XML, JSON, delimited and fixed-length files, string matching/parsing/building, and working with binary data / byte arrays
- Good knowledge of UI/UX design, Angular, and Jest for unit testing
- Good knowledge of SQL (DB2/Oracle dialect is preferable)
- Good knowledge of building and deploying applications running in Kubernetes and Docker
- Experience working with SOA & microservices utilizing REST
- Experience with design and implementation of cloud-ready applications and deployment pipelines on large-scale container platform clusters is a plus
- Experience working in a Continuous Integration and Continuous Delivery environment, familiar with Tekton, Harness, Jenkins, code quality tools, etc.
- Knowledge of industry-standard best practices such as design patterns, coding standards, coding modularity, prototypes, etc.
- Analytical understanding of a variety of new ways of working, such as problem solving, Extreme Programming, Behavior-Driven Development, and DevOps
- Experience in debugging, tuning and optimizing components
- Understanding of the SDLC lifecycle for Agile methodologies
- Excellent written and oral communication skills
- Experience developing applications in the Financial Services industry is preferred

Nice to have experience:
- Messaging systems: IBM MQ, Kafka, RabbitMQ, ActiveMQ, Tibco, etc.
- Tomcat, Jetty, Apache HTTPD
- Build/configure/deploy automation tools: Jenkins, Light Speed, etc.
- Linux ecosystem
- Autosys
- API management (APIm)
- APM tools: Dynatrace, AppDynamics, etc.
- Caching technologies: Redis, Hazelcast, Memcached, etc.

Qualifications:
- 10+ years of relevant experience in the Financial Services industry
- Intermediate-level experience in an Applications Development role
- Consistently demonstrates clear and concise written and verbal communication
- Demonstrated problem-solving and decision-making skills
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
- Bachelor's degree/University degree or equivalent experience

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking to fill this opportunity for one of our leading financial-domain clients.

Position: Big Data Developer (Apache Spark)
Location: Pune (Hybrid)
Experience: 6 – 9 years

Job Description:
- True hands-on developer in programming languages like Java or Scala.
- Expertise in Apache Spark.
- Database modelling and working with any SQL or NoSQL database is a must.
- Working knowledge of scripting languages like shell/Python.
- Experience working with Cloudera is preferred.
- Orchestration tools like Airflow or Oozie would be a value addition (an Airflow sketch follows this posting).
- Knowledge of table formats like Delta or Iceberg is a plus.
- Working experience with version control like Git and build tools like Maven is recommended.
- Software development experience is good to have, along with data engineering experience.
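As referenced above, here is a minimal Airflow DAG sketch (Airflow 2.x style; `schedule` is the 2.4+ parameter name) that orchestrates a daily spark-submit run. The DAG id, schedule, and job script path are illustrative assumptions.

```python
# Minimal sketch: an Airflow 2.x DAG that runs a daily Spark job.
# DAG id, schedule, and the job script path are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_spark_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="spark_submit_ingest",
        bash_command=(
            "spark-submit --master yarn "
            "/opt/jobs/ingest.py --date {{ ds }}"  # hypothetical job script;
            # {{ ds }} is Airflow's built-in execution-date macro
        ),
    )
```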
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Description
We are seeking a highly skilled Performance Testing Engineer with strong expertise in Apache JMeter to join our QA team. The ideal candidate will be responsible not only for designing and executing performance tests but also for gathering performance requirements from stakeholders to ensure systems meet expected load, responsiveness, and scalability criteria.

Requirements
4-6 years' experience in software performance testing and engineering.

Are you ready to work on world-changing technologies? Today, organizations need to move with increased agility and insight to grow and thrive. Boomi is one of the hottest tech companies in the SaaS/Cloud industry, named a Leader for the eighth year in a row in the Gartner Enterprise iPaaS Magic Quadrant and recently recognized by Inc. Magazine as one of the best workplaces. Our award-winning, patented technology is transforming the world of integration by making enterprise-class integration technology accessible and affordable to companies of all sizes. Boomi provides the foundation on which your business can evolve and innovate. According to a recent survey by Vanson Bourne, connected businesses are far outpacing their competitors. We help organizations connect everything and engage everywhere across any channel, device or platform. More than 7,000 organizations are using Boomi to run better, faster and smarter. Working at Boomi means doing what you love. We hire trailblazers with an entrepreneurial spirit who can solve challenging problems, make a real impact in technology and want to build something big. If you are passionate about solving hard problems, enjoy working with world-class people and developing cutting edge technology, you should explore a career with Boomi. Learn more at http://www.boomi.com/ or visit Boomi Careers.

Join us as a Performance Engineer on our Performance, Scalability and Resiliency (PSR) Engineering team in Bangalore/Hyderabad, India to do the best work of your career and make a profound social impact.

What you'll achieve
As a Performance Engineer, you will be responsible for validating and recommending performance optimizations in Boomi's computing infrastructure and software. You will work with our Product Development and Site Reliability Engineering teams on performance monitoring, tuning and tooling.

You will:
- Analyze software architecture (monolith and microservice) and identify potential areas of performance, scalability and resiliency improvements
- Work closely with architects on capacity planning, validation and benchmarking for any new microservices being implemented
- Identify KPIs, perform trending and analysis, identify patterns and engineer remedial solutions for a highly performant, fault-tolerant and resilient platform and application stack
- Design, automate and perform scalability and resiliency tests using various tools like BlazeMeter, NeoLoad, JMeter, and Chaos Monkey/Gremlin
- Use the observability stack to improve diagnosability and trending around performance bottlenecks
- Identify performance tuning opportunities and recommend remedial solutions

Take the first step towards your dream career. Every Boomer brings something unique to the table. Here's what we are looking for with this role:

Job Responsibilities

Essential Requirements
- Expert in performance engineering fundamentals: arrival rate, workload models, responsiveness, computing resource utilization, time complexity, scalability, resiliency, etc. (a latency-probe sketch follows this posting)
- Expert in monitoring performance using the native Linux OS, Application Performance Management (APM) and infrastructure monitoring tools
- Expertise in understanding AWS services to analyze infrastructure bottlenecks
- Well versed with using New Relic for APM and infrastructure monitoring
- Good hands-on experience with Splunk to query application logs and create dashboards for deeper troubleshooting
- Experience in analyzing heap dumps, thread dumps, and SQL slow query logs to identify performance bottlenecks
- Expert in recommending optimal resource configurations in cloud, virtual machine, container and container orchestration technologies
- Flexibility to work in a remote and geographically distributed team environment

Desirable Requirements
- Experience in writing data extraction and custom monitoring tools using any programming language: Java, Python, R, Bash or similar
- Experience in capacity planning and modeling using AI/ML, queueing models or similar approaches
- Performance tuning experience in Java or similar application code

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
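As referenced above, the "custom monitoring tools" requirement can be as simple as a concurrent latency probe. This minimal Python sketch fires requests against an endpoint and reports latency percentiles; the URL, request count, and concurrency are illustrative assumptions, and it is a stand-in for, not a replacement of, JMeter test plans.

```python
# Minimal sketch: a concurrent latency probe reporting p50/p95.
# Endpoint and concurrency are illustrative assumptions.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/health"  # hypothetical endpoint
N_REQUESTS, CONCURRENCY = 200, 20

def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(N_REQUESTS)))

print(f"p50={statistics.median(latencies) * 1000:.1f} ms")
print(f"p95={latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")
```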
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Title: Senior Application Developer
Experience Range: 8-12 Years
Location: Chennai, Hybrid
Employment Type: Full-Time

About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.

About UPS Track Alert
The UPS Track Alert API aims to enhance the tracking experience for high-volume customers and third parties, while reducing the overall load on the Track API and monetizing our tracking services. The goal of the Track Alert API is to reduce unnecessary burden on our system while giving our customers the ability to receive status updates on small packages quickly and accurately. The Track Alert API's benefits are to enhance customer experience, operational efficiency, and data-driven decision making, optimize cash flow through near real-time delivery tracking, and mitigate fraud and theft through near real-time package status monitoring.

About The Role
The Senior Applications Developer provides input and performs full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of applications software, etc.) for business-critical UPS.com Track Visibility applications. The candidate must have strong analytical and problem-solving skills. He/she collaborates with teams, supports emerging technologies, and must have strong verbal and written communication skills. The ideal candidate should have extensive experience in designing, developing, and deploying scalable web applications while adhering to the SAFe Agile methodology using Azure DevOps.

Key Responsibilities
- Collaborate with cross-functional teams to design, develop, and maintain Java Spring Boot based RESTful web services.
- Design and implement microservices-based solutions for high scalability and maintainability.
- Develop and maintain OCP4- and GCP-hosted solutions, ensuring high availability and security.
- Participate in SAFe Agile ceremonies including PI planning, daily stand-ups, and retrospectives.
- Utilize Azure DevOps for CI/CD pipeline setup, version control, and automated deployments.
- Perform code reviews, ensure coding standards, and mentor junior developers.
- Troubleshoot and resolve complex technical issues across frontend and backend systems.
Primary Skills
Backend: Java, Spring Boot, Apache Camel, Java Messaging Service, NoSQL databases, JSON, and XML
Cloud: OpenShift, Google Cloud Platform
DevOps & CI/CD: Azure DevOps - Pipelines, Repos, Boards
Architecture & Design Patterns:
RESTful web service client/server development
Microservices architecture
Object-oriented analysis and design
Messaging queue and pub/sub architecture (see the sketch after this posting)
Secondary Skills
Testing: Unit testing (xUnit, NUnit) and integration testing; Cucumber; JMeter
API Management: RESTful API design and development; API Gateway, OAuth, OpenAPI/Swagger
Security & Performance: Application performance optimization and monitoring
Methodologies: SAFe Agile Framework - familiarity with PI Planning, iterations, and Agile ceremonies
Tools & Collaboration: Git, IntelliJ or Eclipse; collaboration tools like Microsoft Teams
Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience in client- and server-side web service development.
Strong understanding of cloud-native application design, especially on GCP.
Excellent problem-solving skills and the ability to lead technical discussions.
Nice To Have
Exposure to containerization technologies (Docker, Kubernetes).
Google Cloud Platform services: Google Cloud Storage, Bigtable, Pub/Sub
Knowledge of code quality inspection tools, dependency management systems, and software vulnerability detection and remediation
Soft Skills
Strong problem-solving abilities and attention to detail.
Excellent communication skills, both verbal and written.
Effective time management and organizational capabilities.
Ability to work independently and within a collaborative team environment.
Strong interpersonal skills to engage with cross-functional teams.
About The Team
You will be part of a dynamic and collaborative team of passionate developers, architects, and product owners dedicated to building high-performance web applications. Our team values innovation, continuous learning, and agile best practices. We work closely using the SAFe Agile framework and foster an inclusive environment where everyone's ideas are valued.
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
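To make the messaging and pub/sub skills above concrete, here is a minimal, hypothetical sketch of publishing a package-status event to a Google Cloud Pub/Sub topic with the official Python client. The project, topic, and event fields are assumptions for illustration, not UPS's actual Track Alert design:

```python
# Hedged sketch: publishing a near real-time package status event to a
# GCP Pub/Sub topic. Project, topic, and payload fields are placeholders.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "package-status-events")

event = {
    "tracking_number": "1Z999AA10123456784",  # sample format, not a real shipment
    "status": "OUT_FOR_DELIVERY",
    "timestamp": "2025-07-01T09:30:00Z",
}

# Pub/Sub messages are bytes; publish() returns a future that resolves to
# the server-assigned message ID once the broker acknowledges the event.
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("Published message:", future.result())
```

Downstream subscribers (for example, a customer-facing webhook dispatcher) would consume from a subscription on the same topic, which is the decoupling that pub/sub architectures buy.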
Posted 1 week ago
3.0 years
0 Lacs
Greater Chennai Area
Remote
Your work days are brighter here.
At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture - a culture driven by our value of putting our people first. Ever since, the happiness, development, and contribution of every Workmate has been central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.
At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday.
About The Team
If you thrive on tackling significant technical challenges, delivering scalable solutions for mission-critical platforms, and collaborating closely with world-class engineers, you will love being a part of our Technology Product Management team! You'll help to build the foundational services that power Workday's enterprise cloud, impacting millions of users globally.
About The Role
We’re looking for a Technical Product Manager who is deeply curious about complex distributed systems, with a track record of driving innovation within established platforms. Above all, we are seeking a Product Manager who excels at driving technical strategy, making astute trade-offs that balance innovation with system stability, and translating complex technical requirements into actionable, engineering-ready roadmaps. Experience with AWS and data storage and retrieval technologies like Apache Parquet and Apache Iceberg is a plus! If you are a natural collaborator and a great storyteller, capable of working seamlessly with senior engineering leaders and architects around the world, and love diving deep into the intricate details of distributed system design and implementation, we strongly encourage you to apply!
About You
Basic Qualifications
3+ years of experience in technical product management.
A college degree in Computer Science or an equivalent technical degree, or at least 5 years of proven experience at a software company in product management or a similar role
Other Qualifications
Always brings data-informed arguments to the forefront, grounded in SQL- and Python-based data analysis (see the sketch after this posting)
Can get software developers to enthusiastically build on top of your product
Flexible and adaptable in the face of change
Can design scalable, reliable, business-critical systems for large customers
Experience with distributed processing and scheduling; indexing and search technologies; devops-related initiatives to improve developer experience, automation, and operational stability; or system health and monitoring
Our Approach to Flexible Work
With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional about making the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.
Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
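As a purely illustrative example of the SQL- and Python-based analysis this role leans on, the sketch below summarizes a Parquet extract with pandas. The file name and columns are hypothetical:

```python
# Hypothetical Parquet extract; file and column names are assumptions.
# Reading Parquet with pandas requires the pyarrow (or fastparquet) engine.
import pandas as pd

df = pd.read_parquet("service_latency.parquet")  # assumed columns: service, region, p99_ms

# Median p99 latency per service and region, to ground a prioritization
# discussion in data rather than anecdote.
summary = (df.groupby(["service", "region"])["p99_ms"]
             .median()
             .unstack("region")
             .round(1))
print(summary)
```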
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: L2 Support Engineer – IT Infrastructure, Cloud & Web/App Server
Location: Chennai, India
Experience: 5 to 12 Years
Job Type: Hybrid
Industry: IT Services / Infrastructure Management / Cloud Support
Job Summary
We are seeking a skilled and experienced L2 Support Engineer to join our IT operations team in Chennai. The ideal candidate will have strong experience in server and network support, web/app server troubleshooting (Apache, NGINX, IIS, Tomcat, WebLogic, JBoss), cloud services, scripting, and ITSM tools like ServiceNow or BMC Remedy.
Key Responsibilities
Provide L2 support for network, server, and cloud infrastructure issues.
Perform installation, configuration, and troubleshooting of web servers (Apache, NGINX, IIS) and application servers (Tomcat, WebLogic, JBoss).
Monitor and manage system and application performance across multiple environments.
Handle incidents, service requests, and changes using ITSM tools (ServiceNow, BMC Remedy).
Collaborate with cross-functional teams for issue resolution and escalations.
Execute server patching, maintenance, and upgrade tasks.
Develop and maintain automation scripts using Python, Bash, or PowerShell (a minimal example follows this posting).
Generate system performance and health reports and assist with root cause analysis.
Ensure compliance with IT policies, standards, and security guidelines.
Required Skills & Qualifications
5–12 years of experience in L2 IT support, infrastructure management, or technical operations.
Strong expertise in web servers (Apache, NGINX, IIS) and app servers (Tomcat, WebLogic, JBoss).
Solid understanding of networking fundamentals, TCP/IP, DNS, load balancers, and firewalls.
Hands-on experience with cloud platforms (AWS, Azure, or GCP) – basic provisioning, monitoring, and troubleshooting.
Experience with scripting languages (Python, Bash, PowerShell) for automation and troubleshooting.
Proficiency in ITSM tools like ServiceNow, BMC Remedy, or equivalent.
Excellent problem-solving and communication skills.
Willingness to work in shifts or on-call rotation if required.
Preferred Qualifications
Relevant certifications: AWS/Azure Fundamentals, RHCE, ITIL v4, or similar.
Familiarity with monitoring tools like Nagios, Zabbix, or Prometheus/Grafana.
Basic knowledge of container technologies (Docker, Kubernetes) is a plus.
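A minimal sketch of the sort of automation script the responsibilities describe: probing web/app server health endpoints with Python and logging failures. Hostnames and endpoints are placeholders, not a real environment:

```python
# Hedged sketch: poll web/app server health endpoints and log failures.
# URLs are placeholders; Tomcat's manager status assumes the manager app
# is deployed, and Apache's ?auto status assumes mod_status is enabled.
import logging
import requests

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

ENDPOINTS = {
    "apache": "http://web01.example.internal/server-status?auto",
    "tomcat": "http://app01.example.internal:8080/manager/status",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a 2xx/3xx status."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        logging.info("%s healthy (HTTP %s)", name, resp.status_code)
        return True
    except requests.RequestException as exc:
        logging.error("%s check failed: %s", name, exc)
        return False

if __name__ == "__main__":
    results = {name: check(name, url) for name, url in ENDPOINTS.items()}
```

In practice a failed check would raise an incident in an ITSM tool such as ServiceNow or BMC Remedy rather than stop at a log line.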
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, our innovative microinverter technology revolutionized solar power, making it a safer, more reliable, and scalable energy source. Today, the Enphase Energy System enables users to make, use, save, and sell their own power. Enphase is also one of the most successful and innovative clean energy companies in the world, with more than 80 million products shipped across 160 countries. Join our dynamic teams designing and developing next-gen energy technologies and help drive a sustainable future!
About The Role
The Sr. Data Scientist will be responsible for analyzing product performance in the fleet, supporting the data management activities of the Quality/Customer Service organization, and collaborating with Engineering, Quality, and Customer Service teams as well as Information Technology.
What You Will Be Doing
Strong understanding of industrial processes, sensor data, and IoT platforms, essential for building effective predictive maintenance models
Experience translating theoretical concepts into engineered features, with a demonstrated ability to create features capturing important events or transitions within the data
Expertise in crafting custom features that highlight unique patterns specific to the dataset or problem, enhancing model predictive power
Ability to combine and synthesize information from multiple data sources to develop more informative features
Advanced knowledge of Apache Spark (PySpark, SparkSQL, SparkR) and distributed computing, demonstrated through efficient processing and analysis of large-scale datasets
Proficiency in Python, R, and SQL, with a proven track record of writing optimized and efficient Spark code for data processing and model training
Hands-on experience with cloud-based machine learning platforms such as AWS SageMaker and Databricks, showcasing scalable model development and deployment
Demonstrated capability to develop and implement custom statistical algorithms tailored to specific anomaly detection tasks
Proficiency in statistical methods for identifying patterns and trends in large datasets, essential for predictive maintenance
Demonstrated expertise in engineering features to highlight deviations or faults for early detection
Proven leadership in managing predictive maintenance projects from conception to deployment, with a successful track record of cross-functional team collaboration
Experience extracting temporal features, such as trends, seasonality, and lagged values, to improve model accuracy (see the sketch after this posting)
Skills in filtering, smoothing, and transforming data for noise reduction and effective feature extraction
Experience optimizing code for performance in high-throughput, low-latency environments
Experience deploying models into production, with expertise in monitoring their performance and integrating them with CI/CD pipelines using AWS, Docker, or Kubernetes
Familiarity with end-to-end analytical architectures, including data lakes, data warehouses, and real-time processing systems
Experience creating insightful dashboards and reports using tools such as Power BI, Tableau, or custom visualization frameworks to effectively communicate model results to stakeholders
6+ years of experience in data science with a significant focus on predictive maintenance and anomaly detection
Who You Are And What You Bring
Bachelor’s or Master’s degree or Diploma in Engineering, Statistics, Mathematics, or Computer Science
6+ years of experience as a Data Scientist
Strong problem-solving skills
Proven ability to work independently and accurately
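The temporal feature extraction described above can be sketched briefly in PySpark. This is a hedged example assuming hypothetical table and column names for hourly inverter telemetry:

```python
# Hedged sketch: lagged and rolling features for predictive maintenance
# using PySpark window functions. Table and columns are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("pm-features").getOrCreate()
df = spark.table("telemetry.inverter_sensors")  # assumed: device_id, ts, temperature

w = Window.partitionBy("device_id").orderBy("ts")
w_roll = w.rowsBetween(-23, 0)  # trailing 24 readings, e.g. one day of hourly data

features = (df
    .withColumn("temp_lag_1", F.lag("temperature", 1).over(w))
    .withColumn("temp_delta", F.col("temperature") - F.col("temp_lag_1"))
    .withColumn("temp_roll_mean", F.avg("temperature").over(w_roll))
    .withColumn("temp_roll_std", F.stddev("temperature").over(w_roll)))
```

A sudden spike in `temp_delta` or a drift in the rolling mean relative to fleet norms is the kind of engineered signal an anomaly detector would consume downstream.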
Posted 1 week ago
0 years
0 Lacs
Telangana, India
On-site
Overview
Job Summary: We are seeking a highly skilled Databricks Platform Operations Engineer to join our team, responsible for daily monitoring and resolution of data load issues, platform optimization, capacity planning, and governance management. This role is pivotal in ensuring the stability, scalability, and security of our Databricks environment while acting as a technical architect for platform best practices. The ideal candidate will bring a strong operational background, potentially with earlier experience as a Linux, Hadoop, or Spark administrator, and possess deep expertise in managing cloud-based data platforms.
Databricks Operations Engineer
Location: Hyderabad/Bangalore
Shift: 24x7
Work Mode: Work from Office
Responsibilities
Primary Responsibility: Data Load Monitoring & Issue Resolution
Monitor data ingestion and processing dashboards daily to identify, diagnose, and resolve data load and pipeline issues promptly (see the sketch after this posting).
Act as the primary responder to data pipeline failures, collaborating with data engineering teams for rapid troubleshooting and remediation.
Ensure data availability, reliability, and integrity through proactive incident management and validation.
Maintain detailed logs and reports on data load performance and incident resolution.
Platform Optimization & Capacity Planning
Continuously optimize Databricks cluster configurations, job execution, and resource allocation for cost efficiency and performance.
Conduct capacity planning to anticipate future resource needs and scaling requirements based on workload trends.
Analyze platform usage patterns and recommend infrastructure enhancements to support business growth.
Databricks Governance & Security
Implement and enforce data governance policies within Databricks, including access control, data lineage, and compliance standards.
Manage user permissions and roles using Azure AD, AWS IAM, or equivalent systems to uphold security and governance best practices.
Collaborate with security and compliance teams to ensure adherence to organizational policies and regulatory requirements.
Technical Architecture & Collaboration
Serve as a Databricks platform architect, providing guidance on environment setup, best practices, and integration with other data systems.
Work closely with data engineers, data scientists, governance teams, and business stakeholders to align platform capabilities with organizational goals.
Develop and maintain comprehensive documentation covering platform architecture, operational procedures, and governance frameworks.
Operational Excellence & Automation
Troubleshoot and resolve platform and job-related issues in collaboration with internal teams and Databricks support.
Automate routine administrative and monitoring tasks using scripting languages (Python, Bash, PowerShell) and infrastructure-as-code tools (Terraform, ARM templates).
Participate in on-call rotations and incident management processes to ensure continuous platform availability.
Requirements
Required Qualifications:
Experience administering Databricks or comparable cloud-based big data platforms.
Experience with Jenkins scripting / pipeline scripting.
Demonstrated experience in daily monitoring and troubleshooting of data pipelines and load processes.
Strong expertise in Databricks platform optimization, capacity planning, governance, and architecture.
Background experience as a Linux, Hadoop, or Spark administrator is highly desirable.
Proficiency with cloud platforms (Azure, AWS, or GCP) and their integration with Databricks.
Experience managing user access and permissions with Azure Active Directory, AWS IAM, or similar identity management tools.
Solid understanding of data governance principles, including RBAC, data lineage, security, and compliance.
Proficient in scripting languages such as Python, Bash, or PowerShell for automation and operational tasks.
Excellent troubleshooting, problem-solving, communication, and collaboration skills.
Preferred Skills:
Experience with infrastructure-as-code tools like Terraform or ARM templates.
Familiarity with data catalog and governance tools such as Azure Purview.
Working knowledge of Apache Spark and SQL to support platform administration and governance monitoring.
Experience designing and implementing data lakehouse architectures.
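A minimal sketch of the daily load-monitoring responsibility: listing recent failed job runs through the Databricks Jobs API (2.1). The workspace URL and token below are placeholders; a real deployment would read them from a secret scope or environment configuration:

```python
# Hedged sketch: surface recently failed Databricks job runs for triage.
# Host and token are placeholders, not real credentials.
import requests

HOST = "https://adb-1234567890.12.azuredatabricks.net"  # placeholder workspace
TOKEN = "dapiXXXXXXXX"                                   # placeholder PAT

resp = requests.get(
    f"{HOST}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"completed_only": "true", "limit": 25},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json().get("runs", []):
    state = run.get("state", {})
    if state.get("result_state") == "FAILED":
        # In practice this would open an incident or page the on-call engineer.
        print(run["run_id"], run.get("run_name"), state.get("state_message"))
```

The same loop is easy to schedule as a cron or Jenkins job, which is how this kind of daily monitoring is typically automated.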
Posted 1 week ago
0 years
0 Lacs
India
On-site
Sanctity AI is a Netherlands-based startup founded by an IIT alum, specializing in ethical, safe, and impactful artificial intelligence. Our agile team is deeply focused on critical areas like AI alignment, responsible LLM training, prompt orchestration, and advanced agent infrastructure. In a landscape where many talk ethics, we build and deploy solutions that genuinely embody ethical AI principles. Sanctity AI is positioned at the forefront of solving real-world alignment challenges, shaping the future of trustworthy artificial intelligence. We leverage proprietary algorithms, rigorous ethical frameworks, and cutting-edge research to deliver AI solutions with unparalleled transparency, robustness, and societal impact. Sanctity AI represents a rare opportunity in the rapidly evolving AI ecosystem, committed to sustainable innovation and genuine human-AI harmony.
The Role
As an AI/ML Intern reporting directly to the founder, you’ll go beyond just coding. You’ll own whole pipelines, from data wrangling to deploying cutting-edge ML models in production. You’ll also get hands-on experience with large language models (LLMs), prompt engineering, semantic search, and retrieval-augmented generation. Whether it’s spinning up APIs in FastAPI (see the sketch after this posting), containerizing solutions with Docker, or exploring vector and graph databases like Pinecone and Neo4j, you’ll be right at the heart of our AI innovation.
What You’ll Tackle
Data to Insights: Dive into heaps of raw data and turn it into actionable insights that shape real decisions.
Model Building & Deployment: Use Scikit-learn, XGBoost, LightGBM, and advanced deep learning frameworks (TensorFlow, PyTorch, Keras) to develop state-of-the-art models, then push them to production, scaling on AWS, GCP, or other cloud platforms.
LLM & Prompt Engineering: Fine-tune and optimize large language models. Experiment with prompt strategies and incorporate RAG (Retrieval-Augmented Generation) for more insightful outputs.
Vector & Graph Databases: Implement solutions using Pinecone, Neo4j, or similar technologies for advanced search and data relationships.
Microservices & Big Data: Leverage FastAPI (or similar frameworks) to build robust APIs. If you love large-scale data processing, dabble in Apache Spark, Hadoop, or Kafka to handle the heavy lifting.
Iterative Improvement: Observe model performance, gather metrics, and keep refining until the results shine.
Who You Are
Python Pro: You write clean, efficient Python code using libraries like Pandas, NumPy, and Scikit-learn.
Passionate About AI/ML: You’ve got a solid grasp of algorithms and can’t wait to explore deep learning or advanced NLP.
LLM Enthusiast: You’re familiar with training or fine-tuning large language models and love the challenge of prompt engineering.
Cloud & Containers Savvy: You’ve at least toyed with AWS, GCP, or similar, and have some experience with Docker or other containerization tools.
Data-Driven & Detail-Oriented: You enjoy unearthing insights in noisy datasets and take pride in well-documented, maintainable code.
Curious & Ethical: You believe AI should be built responsibly and love learning about new ways to do it better.
Languages: You can fluently communicate complex technical ideas in English. Fluency in Dutch, Spanish, or French is a plus.
Math Wizard: You have a strong grip on advanced mathematics and statistical modeling. This is a core requirement.
Why Join Us?
Real-World Impact: Your work will address real-world and industry challenges, problems that genuinely need AI solutions.
Mentorship & Growth: Team up daily with founders and seasoned AI pros, accelerating your learning and skill-building.
Experimentation Culture: We encourage big ideas and bold experimentation. Want to try a new approach? Do it.
Leadership Path: Show us your passion and skills, and you could move into a core founding team member role, shaping our future trajectory.
Interested?
Send over your résumé, GitHub repos, or any project links that showcase your passion and talent. We can’t wait to see how you think, build, and innovate. Let’s team up to create AI that isn’t just powerful, but also responsibly built for everyone.
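For a flavor of the FastAPI work mentioned in the role description, here is a minimal, self-contained sketch of a semantic-search-style endpoint. The embedding function and document index are stubs standing in for a real model and a vector database such as Pinecone:

```python
# Hedged sketch: a FastAPI endpoint in the shape of a semantic search API.
# embed() and FAKE_INDEX are stubs; a real service would call an embedding
# model and query a vector database instead.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str
    top_k: int = 3

def embed(text: str) -> list[float]:
    # Stub embedding: replace with a real sentence-embedding model.
    return [float(ord(c) % 7) for c in text[:8]]

FAKE_INDEX = {"doc-1": "AI alignment notes", "doc-2": "Prompt engineering guide"}

@app.post("/search")
def search(q: Query) -> dict:
    _ = embed(q.text)  # in a real system this vector would query the index
    hits = list(FAKE_INDEX.items())[: q.top_k]
    return {"query": q.text,
            "results": [{"id": k, "title": v} for k, v in hits]}
```

Run locally with `uvicorn app:app --reload` and the interactive docs at `/docs` come for free, which is much of FastAPI's appeal for rapid prototyping.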
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
JOB DESCRIPTION
Demonstrate a deep knowledge of, and ability to operationalize, leading data technologies and best practices
Partner end-to-end with Product Managers and Data Scientists to understand customer requirements, design prototypes, and bring ideas to production
We develop real products. You need to be an expert in design, coding, and scripting
Facilitate problem diagnosis and resolution in technical and functional areas
Encourage change, especially in support of data engineering best practices and developer satisfaction
Write high-quality code that is consistent with our standards, creating new standards as necessary
Demonstrate correctness with pragmatic automated tests
Review the work of other engineers in a collegial fashion to promote and improve quality and engineering practices
Develop strong working relationships with others across levels and functions
Participate in, and potentially coordinate, Communities-of-Practice in those technologies in which you have an interest
Participate in continuing education programs to grow your skills both technically and in the Williams-Sonoma business domain
Serve as a member of an agile engineering team and participate in the team's workflow
Criteria
5 years of experience as a professional software engineer
3 - 5 years of experience with big data technologies
Experience building distributed, scalable, and reliable data pipelines that ingest and process data at scale, in batch and real time
Strong knowledge of programming languages/tools including Spark, SQL, Python, Java, Scala, Hive, and Elasticsearch
Experience with streaming technologies such as Spark Streaming, Flink, or Apache Beam (see the sketch after this posting)
Experience with various messaging systems such as Kafka
Experience in implementing Lambda Architecture
Working experience with various SQL and NoSQL databases such as Snowflake, Cassandra, HBase, MongoDB, and/or Couchbase
Working experience with various time-series databases such as OpenTSDB and Apache Druid
Familiarity with ML and deep learning
Working knowledge of various columnar storage formats such as Parquet, Kudu, and ORC
An understanding of software development best practice
Enthusiasm for constant improvement as a Data Engineer
Ability to review and critique code and proposed designs, and offer thoughtful feedback in a collegial fashion
Skilled in writing and presenting: able to craft needed messages so they are clearly expressed and easily understood
Ability to work independently on problems of varying complexity and scope
Bachelor's degree in Computer Science, Engineering, or equivalent work experience
About Us
Founded in 1956, Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Today, Williams-Sonoma, Inc. is one of the United States' largest e-commerce retailers with some of the best known and most beloved brands in home furnishings. Our family of brands includes Williams-Sonoma, Pottery Barn, Pottery Barn Kids, Pottery Barn Teen, West Elm, Williams-Sonoma Home, Rejuvenation, GreenRow, and Mark and Graham. We currently operate retail stores globally, and our products are also available to customers through our catalogs and online worldwide. Williams-Sonoma has established a technology center in Pune, India to enhance its global operations. The India Technology Center serves as a critical hub for innovation and focuses on developing cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management. By integrating advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.
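The streaming experience in the criteria can be illustrated with a short Spark Structured Streaming job that reads from Kafka. Broker, topic, and checkpoint paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath:

```python
# Hedged sketch: a Spark Structured Streaming job consuming a Kafka topic.
# Broker, topic, and checkpoint location are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "orders")
       .load())

# Kafka delivers key/value as binary; cast to string before parsing.
orders = raw.select(F.col("value").cast("string").alias("payload"))

query = (orders.writeStream
         .format("console")  # a real pipeline would write to Parquet, Delta, etc.
         .option("checkpointLocation", "/tmp/checkpoints/orders")
         .start())
query.awaitTermination()
```

The checkpoint location is what gives the job exactly-once-style recovery after restarts, which is the operational heart of "reliable pipelines" in the criteria above.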
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You are as unique as your background, experience and point of view. Here, you’ll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world.
Job Description
Within Data and Analytics Services, the Lead Analytics Consultant is responsible for developing innovative visual analytics solutions and enabling faster and better decision making for Sun Life. Our growing mandate to deliver Data Analytics, Artificial Intelligence and Data Solutions requires an experienced data visualization practitioner to accelerate the development of our strategic analytics projects in support of our business stakeholders.
Preferred Skills
5+ years of experience in:
Tableau development and designing dashboards / decision enablement tools
Tableau Desktop and Server platforms
SQL programming, PL/SQL
Developing a rich portfolio of design use cases demonstrating excellent user experience
Working in an agile development environment, including rapid prototyping during sprints
Qualifications
Minimum graduate degree in Mathematics, Computer Science, Engineering or equivalent
Responsibilities
Visual design expertise with a solid understanding of best practices around dashboards and visual perception
End-to-end development experience and the ability to create complex calculations, including Level of Detail (LOD) expressions, action filters, and user filters, and to implement advanced dashboard practices in Tableau (see the sketch after this posting)
Using blending, sorting, sets and maps in Tableau
Leveraging Tableau Prep for data organization and scheduling
Ability to understand data modeling based on user specs
Experience tuning the performance of Tableau Server dashboards to optimize client experience
Designing custom landing pages for Tableau Server for enhanced user experience
Ability to work independently and manage engagements, or parts of large engagements, directly with business partners
Solid understanding of how to consolidate and transform data into meaningful and actionable information
Ability to draw out meaningful business insights by synthesizing information from multiple sources
Additional Skills
Tableau Desktop and Server certification
Experience with R / Python / Matlab / SAS
Experience with Agile or design-driven development of analytics applications
Experience developing Tableau Extensions using the Tableau Extensions API, jQuery, Apache Tomcat web server, and/or PHP
Exposure to data analytics projects including predictive modeling, data mining and statistical analysis
Job Category: Advanced Analytics
Posting End Date: 30/07/2025
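As an illustration of the Level of Detail (LOD) expressions mentioned in the responsibilities, the pandas snippet below reproduces the effect of a Tableau FIXED LOD such as { FIXED [Region] : SUM([Sales]) }, which can be handy for validating dashboard calculations outside Tableau. The data is made up:

```python
# Illustrative sketch: pandas equivalent of a Tableau FIXED LOD expression.
import pandas as pd

df = pd.DataFrame({
    "Region": ["East", "East", "West", "West"],
    "Segment": ["A", "B", "A", "B"],
    "Sales": [100, 150, 200, 50],
})

# FIXED [Region]: aggregate at the Region level, then broadcast the result
# back to every row, ignoring other dimensions in the view (here, Segment).
df["RegionSales"] = df.groupby("Region")["Sales"].transform("sum")
print(df)
```

Each East row gets RegionSales 250 and each West row 250, regardless of Segment, exactly as the FIXED expression behaves in a Tableau view.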
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Discover your next opportunity with a Fortune Global 500 organization. Explore innovative possibilities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.
Job Description
Job Title: Senior Application Developer
Experience Range: 8-12 Years
Location: Chennai, Hybrid
Employment Type: Full-Time
About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.
About UPS Track Alert
The UPS Track Alert API aims to enhance the tracking experience for high-volume customers and third parties while reducing the overall load on the Track API and monetizing our tracking services. The goal of the Track Alert API is to reduce unnecessary burden on our systems while giving our customers the ability to receive status updates on their small packages quickly and accurately. Track Alert API benefits include an enhanced customer experience, operational efficiency, data-driven decision making, optimized cash flow through near real-time delivery tracking, and mitigation of fraud and theft through near real-time package status monitoring.
About The Role
The Senior Applications Developer provides input and performs full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of applications software, etc.) for business-critical UPS.com Track Visibility applications. The candidate must have strong analytical and problem-solving skills, collaborate well with teams, support emerging technologies, and have strong verbal and written communication skills. The ideal candidate has extensive experience designing, developing, and deploying scalable web applications while adhering to the SAFe Agile methodology using Azure DevOps.
Key Responsibilities
Collaborate with cross-functional teams to design, develop, and maintain Java Spring Boot based RESTful web services.
Design and implement microservices-based solutions for high scalability and maintainability.
Develop and maintain OCP4- and GCP-hosted solutions, ensuring high availability and security.
Participate in SAFe Agile ceremonies including PI planning, daily stand-ups, and retrospectives.
Utilize Azure DevOps for CI/CD pipeline setup, version control, and automated deployments.
Perform code reviews, enforce coding standards, and mentor junior developers.
Troubleshoot and resolve complex technical issues across frontend and backend systems.
Primary Skills
Backend: Java, Spring Boot, Apache Camel, Java Messaging Service, NoSQL databases, JSON, and XML
Cloud: OpenShift, Google Cloud Platform
DevOps & CI/CD: Azure DevOps - Pipelines, Repos, Boards
Architecture & Design Patterns:
RESTful web service client/server development
Microservices architecture
Object-oriented analysis and design
Messaging queue and pub/sub architecture
Secondary Skills
Testing: Unit testing (xUnit, NUnit) and integration testing; Cucumber; JMeter
API Management: RESTful API design and development; API Gateway, OAuth, OpenAPI/Swagger
Security & Performance: Application performance optimization and monitoring
Methodologies: SAFe Agile Framework - familiarity with PI Planning, iterations, and Agile ceremonies
Tools & Collaboration: Git, IntelliJ or Eclipse; collaboration tools like Microsoft Teams
Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience in client- and server-side web service development.
Strong understanding of cloud-native application design, especially on GCP.
Excellent problem-solving skills and the ability to lead technical discussions.
Nice To Have
Exposure to containerization technologies (Docker, Kubernetes).
Google Cloud Platform services: Google Cloud Storage, Bigtable, Pub/Sub
Knowledge of code quality inspection tools, dependency management systems, and software vulnerability detection and remediation
Soft Skills
Strong problem-solving abilities and attention to detail.
Excellent communication skills, both verbal and written.
Effective time management and organizational capabilities.
Ability to work independently and within a collaborative team environment.
Strong interpersonal skills to engage with cross-functional teams.
About The Team
You will be part of a dynamic and collaborative team of passionate developers, architects, and product owners dedicated to building high-performance web applications. Our team values innovation, continuous learning, and agile best practices. We work closely using the SAFe Agile framework and foster an inclusive environment where everyone's ideas are valued.
Employee Type: Permanent (CDI)
At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
Posted 1 week ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Business Area: Professional Services
Seniority Level: Mid-Senior level
Job Description:
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world’s largest enterprises.
Team Description
Cloudera is seeking a Solutions Consultant to join its APAC Professional Services team. In this role you’ll have the opportunity to develop massively scalable solutions to solve complex data problems using CDP, NiFi, Spark and related Big Data technology. This role is a client-facing opportunity that combines consulting skills with deep technical design and development in the Big Data space. This role will give the successful candidate the opportunity to work across multiple industries and large customer organizations.
As the Solutions Consultant you will:
Work directly with customers to implement Big Data solutions at scale using the Cloudera Data Platform and Cloudera DataFlow
Design and implement CDP platform architectures and configurations for customers
Perform platform installation and upgrades for advanced secured cluster configurations
Analyze complex distributed production deployments, and make recommendations to optimize performance
Document and present complex architectures for the customers’ technical teams
Work closely with Cloudera teams at all levels to help ensure the success of project consulting engagements with customers
Drive projects with customers to successful completion
Write and produce technical documentation, blogs and knowledge-base articles
Participate in the pre- and post-sales process, helping both the sales and product teams to interpret customers’ requirements
Keep current with the Hadoop Big Data ecosystem technologies
Work across different time zones
We’re excited about you if you have:
6+ years of experience in Information Technology and system architecture
4+ years of Professional Services (customer-facing) experience architecting large-scale storage, data center and/or globally distributed solutions
3+ years designing and deploying 3-tier architectures or large-scale Hadoop solutions
Ability to understand big data use cases and recommend standard design patterns commonly used in Hadoop-based and streaming data deployments
Knowledge of the data management ecosystem, including concepts of data warehousing, ETL, data integration, etc.
Ability to understand and translate customer requirements into technical requirements
Experience implementing data transformation and processing solutions
Experience designing data queries against data in the HDFS environment using tools such as Apache Hive (see the sketch after this posting)
Experience setting up multi-node Hadoop clusters
Experience with security configuration (LDAP/AD, Kerberos/SPNEGO)
Cloudera Software and/or HDP certification (HDPCA / HDPCD) is a plus
Strong experience implementing software and/or solutions in an enterprise Linux environment
Strong understanding of various enterprise security solutions such as LDAP and/or Kerberos
Strong understanding of network configuration, devices, protocols, speeds and optimizations
Strong understanding of the Java ecosystem, including debugging, logging, monitoring and profiling tools
Excellent verbal and written communications
What you can expect from us:
Generous PTO Policy
Support work-life balance with Unplugged Days
Flexible WFH Policy
Mental & Physical Wellness programs
Phone and Internet Reimbursement program
Access to Continued Career Development
Comprehensive Benefits and Competitive Packages
Paid Volunteer Time
Employee Resource Groups
EEO/VEVRAA
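The Hive query experience above can be sketched with Spark SQL against a Hive-managed table; database, table, and column names here are assumptions for illustration:

```python
# Hedged sketch: querying Hive-managed data in HDFS through Spark SQL.
# Database/table/column names are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-query")
         .enableHiveSupport()  # uses the Hive metastore configured on the cluster
         .getOrCreate())

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.web_events
    WHERE event_date >= date_sub(current_date(), 7)
    GROUP BY event_date
    ORDER BY event_date
""")
daily.show()
```

On a secured CDP cluster the same query would run under the Kerberos identity of the submitting user, with table-level authorization enforced by the platform's policy layer.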
Posted 1 week ago