2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview
Intuit is seeking a Sr. Data Scientist to join our Data Science team at Intuit India. The AI team develops game-changing technologies and experiments that redefine and disrupt our current product offerings. You'll be building and prototyping algorithms and applications on top of the collective financial data of 100 million consumers and small businesses. Applications will span multiple business lines, including personal finance, small business accounting, and tax. You thrive on ambiguity and will enjoy the frequent pivoting that's part of the exploration. Your team will be very small and team members frequently wear multiple hats. In this position you will collaborate closely with the engineering and design teams, as well as the product and data teams in business units. Your role will range from research experimentalist to technology innovator to consultative business facilitator. You must be comfortable partnering with those directly involved with big data infrastructure, software, and data warehousing, as well as product management.

What you'll bring
- MS or PhD in an appropriate technology field (Computer Science, Statistics, Applied Math, Operations Research, etc.)
- 2+ years of data science experience for PhD holders; 5+ years for Masters
- Experience in modern advanced analytical tools and programming languages such as R or Python with scikit-learn
- Efficient in SQL, Hive, or SparkSQL, etc.
- Comfortable in a Linux environment
- Experience in data mining algorithms and statistical modeling techniques such as clustering, classification, regression, decision trees, neural nets, support vector machines, anomaly detection, recommender systems, sequential pattern discovery, and text mining (a small illustrative sketch follows this listing)
- Solid communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences

Preferred Additional Experience
- Apache Spark
- The Hadoop ecosystem
- Java
- HP Vertica
- TensorFlow, reinforcement learning
- Ensemble methods, deep learning, and other topics in the machine learning community
- Familiarity with GenAI and other LLM and DL methods

How you will lead
- Perform hands-on data analysis and modeling with huge data sets
- Apply data mining, NLP, and machine learning (both supervised and unsupervised) to improve relevance and personalization algorithms
- Work side-by-side with product managers, software engineers, and designers in designing experiments and minimum viable products
- Discover data sources, get access to them, import them, clean them up, and make them "model-ready". You need to be willing and able to do your own ETL
- Create and refine features from the underlying data. You'll enjoy developing just enough subject matter expertise to have an intuition about what features might make your model perform better, and then you'll lather, rinse, and repeat
- Run regular A/B tests, gather data, perform statistical analysis, draw conclusions on the impact of your optimizations, and communicate results to peers and leaders
- Explore new design or technology shifts in order to determine how they might connect with the customer benefits we wish to deliver
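For illustration only (not part of the posting above), here is a minimal scikit-learn sketch of the kind of anomaly-detection modeling the listing describes; the feature names and data are fabricated.

```python
# Minimal sketch: unsupervised anomaly detection with scikit-learn.
# The DataFrame columns below are hypothetical stand-ins for transaction features.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "amount": [12.5, 9.9, 11.2, 950.0, 10.4, 8.7],
    "txns_per_day": [3, 2, 4, 40, 3, 2],
})

# Scale features so no single column dominates the isolation splits.
X = StandardScaler().fit_transform(df)

# contamination is the assumed fraction of anomalies in the data.
model = IsolationForest(contamination=0.2, random_state=42)
df["anomaly"] = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print(df[df["anomaly"] == -1])
```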
Posted 2 weeks ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, along with programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
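By way of illustration (not part of the listing), here is a minimal PySpark sketch of the kind of batch pipeline this role describes; the paths and column names are hypothetical.

```python
# Minimal sketch of a PySpark batch pipeline: read, transform, write.
# Input/output paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")

daily = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.count("*").alias("order_count"),
         F.sum("amount").alias("revenue"))
)

# Partition by date so downstream queries can prune partitions.
daily.write.mode("overwrite").partitionBy("order_date") \
     .parquet("s3://example-bucket/curated/orders_daily/")
```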
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title - ETL Developer - Informatica BDM/DEI
📍 Location: Onsite
🕒 Employment Type: Full Time
💼 Experience Level: Mid-Senior

Job Summary
We are seeking a skilled and results-driven ETL Developer with strong experience in Informatica BDM (Big Data Management) or Informatica DEI (Data Engineering Integration) to design and implement scalable, high-performance data integration solutions. The ideal candidate will work on large-scale data projects involving structured and unstructured data, and contribute to the development of reliable and efficient ETL pipelines across modern big data environments.

Key Responsibilities
- Design, develop, and maintain ETL pipelines using Informatica BDM/DEI for batch and real-time data integration
- Integrate data from diverse sources including relational databases, flat files, cloud storage, and big data platforms such as Hive and Spark
- Translate business and technical requirements into mapping specifications and transformation logic
- Optimize mappings, workflows, and job executions to ensure high performance, scalability, and reliability
- Conduct unit testing and participate in integration and system testing
- Collaborate with data architects, analysts, and business stakeholders to understand requirements and deliver robust solutions
- Support data quality checks, exception handling, and metadata documentation
- Monitor, troubleshoot, and resolve ETL job issues and performance bottlenecks
- Ensure adherence to data governance and compliance standards throughout the development lifecycle

Key Skills and Qualifications
- 5-8 years of experience in ETL development with a focus on Informatica BDM/DEI
- Strong knowledge of data integration techniques, transformation logic, and job orchestration
- Proficiency in SQL, with the ability to write and optimize complex queries
- Experience working with Hadoop ecosystems (e.g., Hive, HDFS, Spark) and large-volume data processing
- Understanding of performance optimization in ETL and big data environments
- Familiarity with job scheduling tools and workflow orchestration (e.g., Control-M, Apache Airflow, Oozie)
- Good understanding of data warehousing, data lakes, and data modeling principles
- Experience working in Agile/Scrum environments
- Excellent analytical, problem-solving, and communication skills

Good to Have
- Experience with cloud data platforms (AWS Glue, Azure Data Factory, or GCP Dataflow)
- Exposure to Informatica IDQ (Data Quality)
- Knowledge of Python, shell scripting, or automation tools
- Informatica or Big Data certifications
Posted 2 weeks ago
6.0 - 11.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: strong SQL query/development skills; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Requisition ID # 25WD89173

Position Overview
Autodesk is a world leader in 3D design software for manufacturing, simulation, construction, civil infrastructure, and entertainment. This role is a unique opportunity to maximize the success of Autodesk's sales tool productivity investments. It requires the candidate to focus on the development of future-state business process capabilities and to drive the elimination of redundancies, optimize efficiencies, lower costs, and streamline delivery to our GTM teams. You will play an integral role in the technical development of critical operational tools and business workflows that leverage data for decision making. The role involves driving sales team productivity by creating future-state best practices for business analytics. You will be responsible for creating analytical reports and developing BI tools with action-oriented conclusions.

Key Responsibilities
- Design, develop, and maintain scalable Power BI dashboards and reports, delivering actionable insights for business decision-making
- Build and deploy Power Automate flows to streamline and automate business processes and improve operational efficiency
- Translate complex data from various sources into meaningful visualizations using Power BI and other BI tools
- Develop bots and automation tools using Power Platform to improve reporting and workflow integration
- Collaborate with business stakeholders to gather requirements and deliver tailored BI solutions
- Demonstrate strong SQL proficiency to extract and manipulate data from multiple sources
- Apply data modeling and warehousing principles for BI solution architecture and performance optimization
- Conduct ad hoc data mining, statistical analysis, and reporting to support business needs
- Ensure clean, well-documented, and testable BI code with a focus on scalability and maintainability
- Leverage creativity and advanced visualization skills to present complex data simply and effectively
- Partner with cross-functional teams to integrate BI systems and align with organizational data infrastructure
- Provide technical solutions that improve business processes through automation and analytics
- Experience or knowledge in Snowflake, Amazon Redshift, and S3 is considered a strong advantage
- Stay current with industry trends and best practices in data analytics and business intelligence

Education and Experience
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field
- 5+ years of relevant experience in business analytics, BI development, or data engineering
- Proven expertise in Power BI and Power Automate is essential
- Proficient in SQL and working with large datasets across multiple sources
- Hands-on experience in data modeling, scripting, and designing effective user interfaces
- Working knowledge of cloud data platforms such as Snowflake, Amazon Redshift, and Amazon S3 is highly desirable
- Experience with other BI tools like QlikView, Qlik Sense, Anaplan, or Looker is a plus
- Familiarity with HTML, JavaScript, and Big Data/Hadoop environments is a plus
- Basic knowledge of mobile app or bot development is advantageous
- Strong analytical mindset with advanced Excel and statistical analysis skills
- Excellent communication skills for both technical and non-technical audiences

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies.
We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world. When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!

Salary Transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
Posted 2 weeks ago
7.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Python AWS Engineer
GCL: D1

Introduction to Role
This is an outstanding opportunity for a senior engineer to advance modern software development practices within our team (DevOps/CI/CD/automated testing), building a bespoke integrated software framework (on-premise/cloud/COTS) which will accelerate the ability of AZ scientists to develop new drug candidates for unmet patient needs. To achieve this goal, we need a strong senior individual to work with teams of engineers, as well as engage and influence other global teams within Solution Delivery to ensure that our priorities are aligned with the needs of our science. The successful candidate will be a hands-on coder, passionate about software development and also willing to coach and enable wider teams to grow and expand their software delivery capabilities and skills.

Accountabilities
The role will encompass a variety of approaches with the aim of simplifying and streamlining scientific workflows, data, and applications, while advancing the use of AI and automation for scientists. Working alongside the platform lead, architect, BA, and informaticians, you will work to understand requirements, devise technical solutions, estimate, and deliver and run operationally sustainable platform software. You will need to use your technical acumen to determine an optimal balance between COTS and home-grown solutions and own their lifecycles and roadmap. Our delivery teams are distributed across multiple locations, and as Senior Engineer you will need to coordinate the activities of technical internal and contract employees. You must be capable of working with others, driving ownership of solutions, and showing humility while striving to enable the development of platform technical team members on our journey. You will raise expectations within the whole team, solve complex technical problems, and work alongside complementary delivery platforms while aligning solutions with scientific and data strategies and target architecture.

Essential Skills/Experience
- 7-10 years of experience working with Python
- Proven experience with Python for data manipulation and analysis
- Strong proficiency in SQL and experience with relational databases
- In-depth knowledge and hands-on experience with various AWS services (S3, Glue, VPC, Lambda Functions, Batch, Step Functions, ECS); a short illustrative sketch follows this listing
- Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK and CloudFormation
- Experience with Snowflake or other data warehousing solutions
- Knowledge of CI/CD processes and tools, specifically Jenkins and Docker
- Experience with big data technologies such as Apache Spark or Hadoop is a plus
- Strong analytical and problem-solving skills, with the ability to work independently and as part of a team
- Excellent communication skills and ability to collaborate with cross-functional teams
- Familiarity with data governance and compliance standards
- Experience with process tools like JIRA and Confluence
- Experience building unit tests, integration tests, system tests, and acceptance tests
- Good team player with the attitude to work with the highest integrity

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility.
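For illustration only (not part of the posting), here is a minimal sketch of the Python-on-AWS work this role describes, using boto3 to stage a file in S3 and start an AWS Glue job; the bucket, key, and job name are hypothetical.

```python
# Minimal sketch: stage a local file in S3, then trigger a Glue job on it.
# Bucket, key, and job names are hypothetical; credentials come from the
# standard AWS credential chain (env vars, profile, or instance role).
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

bucket, key = "example-science-data", "raw/assay_results.csv"
s3.upload_file("assay_results.csv", bucket, key)

# Pass the uploaded object's location to the Glue job as an argument.
run = glue.start_job_run(
    JobName="transform-assay-results",
    Arguments={"--input_path": f"s3://{bucket}/{key}"},
)
print("Started Glue job run:", run["JobRunId"])
```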
Join us in our unique and ambitious world. At AstraZeneca, our work has a direct impact on patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining cutting-edge science with leading digital technology platforms and data. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise. Here you can innovate, take ownership, explore new solutions, experiment with leading-edge technology, and tackle challenges in a modern technology environment. Ready to make an impact? Apply now!

Date Posted: 14-Jul-2025
Closing Date:

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 2 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!

Description
We are seeking a talented Lead Software Engineer – Performance to deliver roadmap features of the Enterprise TruRisk Platform, which helps customers measure, communicate, and eliminate cyber risks. You will lead the performance engineering efforts across Spark, Kafka, Elasticsearch, and middleware APIs, ensuring that our real-time data pipelines and services meet enterprise-grade SLAs. As part of our high-performing engineering team, you will design and execute performance testing strategies, identify system bottlenecks, and work with development teams to implement performance improvements that support the processing of billions of cybersecurity events per day across our data platform.

Responsibilities
- Own the performance strategy across distributed systems, including Hadoop, Spark, Kafka, Elasticsearch/OpenSearch, big data components, and APIs, for each release
- Define, develop, and execute performance test plans, load tests, stress tests, and soak tests
- Create realistic performance test scenarios for data pipelines and microservices based on production-like workloads
- Proactively identify bottlenecks, resource contention, and latency issues using tools such as JMeter, Spark UI, Kafka Manager, Elastic Monitoring, and AppDynamics
- Provide deep-dive analysis and recommendations on tuning and scaling Spark jobs, Kafka topics/partitions, ES queries, and API endpoints
- Collaborate with developers, architects, and infrastructure teams to integrate performance feedback into design and implementation
- Simulate and benchmark real-time and batch data flow at scale using synthetic and production-like datasets, and own this synthetic data generator framework end to end (a minimal sketch follows this listing)
- Lead the initiative to build a performance testing framework that integrates with CI/CD pipelines
- Establish and track SLAs for throughput, latency, CPU/memory utilization, and garbage collection
- Create performance dashboards and visualizations using Prometheus/Grafana, Kibana, or equivalent
- Document performance test findings and create technical reports for leadership and engineering teams
- Recommend performance optimizations to Dev and Platform groups
- Take responsibility for optimizing overall cost
- Contribute to feature development and fixes apart from performance benchmarking

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 8+ years of overall experience in distributed systems and backend performance engineering
- 4+ years of Java development experience with microservices architecture
- Proficient in scripting (Python, Bash) for automation and test data generation
- 4+ years of hands-on experience with Apache Spark – performance tuning, memory management, and DAG optimization
- 3+ years of experience with Kafka – topic optimization, producer/consumer tuning, and lag monitoring
- 3+ years of experience with Elasticsearch/OpenSearch – query profiling, indexing strategies, and cluster optimization
- 3+ years of experience with performance testing tools such as JMeter or similar
- Excellent programming and design skills, with hands-on experience in Spring and Hibernate
- Deep understanding of middleware and microservices performance, including REST APIs
- Strong knowledge of profiling, debugging, and observability tools (e.g., Spark UI, Athena, Grafana, ELK)
- Experience designing and running benchmarks at scale for high-throughput environments in the petabyte range
- Experience with containerized workloads and performance testing in Kubernetes/Docker environments
- Solid understanding of cloud-native architecture (OCI) and distributed systems design
- Strong knowledge of Linux operating systems and performance-related improvements
- Familiarity with CI/CD integration for performance testing (e.g., Jenkins, GitHub)
- Knowledge of data lake architecture, caching solutions, and message queues
- Strong communication skills and experience influencing cross-functional engineering teams

Additional Plus Competencies
- Prior experience in any analytics platform on Big Data would be a huge plus.
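As a sketch of the synthetic-data-generation work mentioned above (illustrative, not from the posting), here is a minimal Python producer that pushes fabricated security events into Kafka for load testing, assuming the kafka-python library; the topic name and event schema are hypothetical.

```python
# Minimal sketch: generate synthetic security events and produce them to Kafka
# for load testing. Assumes the kafka-python package; topic/schema are hypothetical.
import json
import random
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

EVENT_TYPES = ["login_failure", "port_scan", "malware_detected"]

for i in range(100_000):
    event = {
        "event_id": i,
        "type": random.choice(EVENT_TYPES),
        "host": f"host-{random.randint(1, 500)}",
        "ts": time.time(),
    }
    producer.send("security-events", value=event)

producer.flush()  # block until all buffered events are delivered
```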
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
The Data Analytics Strategy platform and decision tool team is responsible for the data strategy for all of CSWT and the development of platforms which support the data strategy. The Data Science platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*
As a Senior Hadoop Developer developing Hadoop components in SDP (Strategic Data Platform), the individual will be responsible for understanding design, proposing high-level and detailed design solutions, and ensuring that coding practices and quality comply with software development standards. Working as an individual contributor on projects, the person should have good analytical skills to make quick decisions during tough times and good knowledge of writing complex queries on a large cluster. The role involves engaging in discussions with architecture teams to arrive at design solutions, proposing new technology adoption ideas, attending project meetings, partnering with near-shore and offshore teammates in an agile environment, and coordinating with other application teams, development, testing, and upstream and downstream partners.

Responsibilities:
- Develop high-performance and scalable analytics solutions using the Big Data platform to facilitate the collection, storage, and analysis of massive data sets from multiple channels
- Develop efficient utilities, data pipelines, and ingestion frameworks that can be utilized across multiple business areas
- Utilize in-depth knowledge of the Hadoop stack and storage technologies, including HDFS, Spark, MapReduce, YARN, Hive, Sqoop, Impala, Hue, and Oozie, to design and optimize data processing workflows
- Perform data analysis, coding, and performance tuning, propose improvement ideas, and drive development activities at offshore
- Analyze, modify, and tune complex Hive queries (see the illustrative sketch after this listing)
- Hands-on experience writing and modifying Python/shell scripts
- Provide guidance and mentorship to junior teammates
- Work with strategic partners to understand requirements and work on high-level and detailed design to address real-time issues in production
- Partner with near-shore and offshore teammates in an Agile environment, coordinating with other application teams, development, testing, and upstream/downstream partners
- Work on multiple projects concurrently, take ownership of and pride in the work, attend project meetings, understand requirements, design solutions, and develop code
- Identify gaps in technology and propose viable solutions
- Identify improvement areas within the application and work with the respective teams to implement them
- Ensure adherence to defined processes and quality standards, best practices, and high quality levels in all deliverables

Desired Skills*
- Data lake architecture: understanding of Medallion architecture
- Ingestion frameworks: knowledge of ingestion frameworks for structured, unstructured, and semi-structured data
- Data warehouse: familiarity with Apache Hive and Impala
- Performs Continuous Integration and Continuous Development (CI/CD) activities
- Hands-on experience working in Cloudera Data Platform (CDP) to support data science
- Contributes to story refinement and definition of requirements
- Participates in estimating the work necessary to realize a story/requirement through the delivery lifecycle
- Extensive hands-on experience supporting platforms that allow modelers and analysts to go through complete model lifecycle management (data munging, model development/training, governance, deployment)
- Experience with model deployment, scoring, and monitoring for batch and real-time on various technologies and platforms
- Experience in Hadoop cluster and integration, including ETL, streaming, and API styles of integration
- Experience in automation for deployment using Ansible playbooks and scripting
- Experience developing and building RESTful API services in an efficient and scalable manner
- Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets (TBs) quickly and reliably using Kafka, Spark, and YARN
- Experience with processing and deployment technologies such as YARN, Kubernetes/containers, and serverless compute for model development and training
- Effective communication and strong stakeholder engagement skills; proven ability to lead and mentor a team of software engineers in a dynamic environment

Requirements*

Education*
Graduation / Post Graduation

Experience Range*
7 to 9 years

Foundational Skills
Hadoop, Hive, Sqoop, Impala, Unix/Linux scripts

Desired Skills
Python, CI/CD, ETL

Work Timings*
11:30 AM to 8:30 PM IST

Job Location*
Chennai
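To illustrate the Hive query tuning described above (an example of ours, not from the posting), here is a minimal PySpark sketch that rewrites a query to prune partitions and broadcast a small dimension table; table and column names are hypothetical.

```python
# Minimal sketch: tuning a Hive query via Spark SQL. Table/column names are
# hypothetical. Filtering on the partition column (event_date) lets Spark
# prune partitions instead of scanning the whole table.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive_query_tuning")
         .enableHiveSupport()
         .getOrCreate())

# The broadcast hint ships the small dimension table to every executor,
# avoiding a full shuffle join against the large fact table.
result = spark.sql("""
    SELECT /*+ BROADCAST(d) */
           d.region,
           SUM(f.txn_amount) AS total_amount
    FROM   events_db.transactions f
    JOIN   events_db.region_dim d
      ON   f.region_id = d.region_id
    WHERE  f.event_date = '2025-07-01'   -- partition pruning
    GROUP BY d.region
""")
result.show()
```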
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for an experienced professional with over 7 years of experience to join our production/application support team. The ideal candidate should possess a strong technical skill set including Unix, SQL, ITIL, Autosys, and Big Data technologies. Additionally, expertise in financial services domains such as securities, secured financing, rates, liquidity reporting, derivatives, and front-office/back-office systems is highly desirable.

As a member of our team, your key responsibilities will include providing L2 production support for critical liquidity reporting and financial applications to ensure high availability and performance. You will be tasked with monitoring and resolving incidents related to trade capture, batch failures, market data, pricing, risk, and liquidity reporting. Additionally, you will proactively manage alerts, logs, and jobs using tools such as Autosys, Unix, and monitoring platforms like ITRS/AWP. You will execute advanced SQL queries and scripts for data analysis, validation, and issue resolution. Your role will involve supporting multiple applications built on technologies like stored procedures, SSIS, SSRS, and Big Data ecosystems such as Hive, Spark, and Hadoop. Furthermore, you will be responsible for maintaining knowledge bases, SOPs, and runbooks for production support; participating in change management and release activities; leading root cause analysis (RCA); conducting post-incident reviews; and collaborating with infrastructure teams on capacity, performance, and system resilience initiatives to ensure continuous service improvement and automation.

To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field. A minimum of 7 years of experience in application or production support, with at least 2 years at an advanced level, is required. Proficiency in Unix/Linux scripting, SQL (MSSQL/Oracle), Big Data technologies, job schedulers like Autosys, and log analysis tools, along with a solid understanding of financial instruments and trade lifecycles, is essential. Moreover, excellent analytical and problem-solving skills, effective communication, stakeholder management abilities, and familiarity with ITIL processes are crucial for this position. Join us in contributing to continuous service improvement, stability management, and automation initiatives in the financial services domain.

Citi is an equal opportunity employer. If you are a person with a disability and require accommodation to use our search tools or apply for a career opportunity, please review the Accessibility at Citi guidelines.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences.

To be successful as a Big Data Engineer, you should have experience with:
- Full stack software development for large-scale, mission-critical applications.
- Mastery in distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, Airflow (an illustrative streaming sketch follows this listing).
- Expertise in Scala, Java, Python, J2EE technologies, microservices, Spring, Hibernate, REST APIs.
- Experience with n-tier web application development and frameworks like Spring Boot, Spring MVC, JPA, Hibernate.
- Proficiency with version control systems, preferably Git; GitHub Copilot experience is a plus.
- Proficient in API development using SOAP or REST, JSON, and XML.
- Experience developing back-end applications with multi-process and multi-threaded architectures.
- Hands-on experience building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- Experience in DevOps practices like CI/CD, test automation, and build automation using tools like Jenkins, Maven, Chef, Git, Docker.
- Experience with data processing in cloud environments like Azure or AWS.
- Data product development experience is essential.
- Experience in Agile development methodologies like SCRUM.
- Result-oriented with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology skills, as well as job-specific technical skills. This role is for the Pune location.

Purpose of the role:
To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise.
- Thorough understanding of the underlying principles and concepts within the area of expertise.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
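Purely as an illustration of the Spark/Kafka streaming stack this role names (not taken from the posting), here is a minimal PySpark Structured Streaming sketch that reads from a Kafka topic and maintains a running aggregate; the topic, schema, and checkpoint path are hypothetical.

```python
# Minimal sketch: Spark Structured Streaming from Kafka.
# Topic name, schema, and checkpoint path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

schema = (StructType()
          .add("account_id", StringType())
          .add("amount", DoubleType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "payments")
       .load())

# Kafka delivers bytes; parse the JSON value into typed columns.
payments = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("p")
).select("p.*")

totals = payments.groupBy("account_id").agg(F.sum("amount").alias("total"))

query = (totals.writeStream
         .outputMode("complete")
         .format("console")
         .option("checkpointLocation", "/tmp/payments_ckpt")
         .start())
query.awaitTermination()
```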
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Kolkata, West Bengal
On-site
You are a Data Engineer with 3+ years of experience, proficient in SQL and Python development. You will be responsible for designing, developing, and maintaining scalable data pipelines to support ETL processes using tools like Apache Airflow, AWS Glue, or similar. Your role involves optimizing and managing relational and NoSQL databases such as MySQL, PostgreSQL, MongoDB, or Cassandra for high performance and scalability. You will write advanced SQL queries, stored procedures, and functions to efficiently extract, transform, and analyze large datasets. Additionally, you will implement and manage data solutions on cloud platforms like AWS, Azure, or Google Cloud, utilizing services such as Redshift, BigQuery, or Snowflake. Your contributions to designing and maintaining data warehouses and data lakes will support analytics and BI requirements. Automation of data processing tasks through script and application development in Python or other programming languages is also part of your responsibilities.

As a Data Engineer, you will implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security. Collaboration with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions is essential. Identifying and resolving performance bottlenecks in data systems and optimizing data storage and retrieval are key aspects. Maintaining comprehensive documentation for data processes, pipelines, and infrastructure is crucial. You are expected to stay up to date with the latest trends in data engineering, big data technologies, and cloud services.

You should hold a Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field. Proficiency in SQL, relational databases, NoSQL databases, and Python programming, plus experience with data pipeline tools and cloud platforms, is required. Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus. Strong analytical and problem-solving skills with a focus on performance optimization and scalability are essential. Excellent verbal and written communication skills are necessary to convey technical concepts to non-technical stakeholders, and you should be able to work collaboratively in cross-functional teams. Preferred certifications include AWS Certified Data Analytics, Google Professional Data Engineer, or similar. An eagerness to learn new technologies and adapt quickly in a fast-paced environment will be valuable in this role.
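As an illustration of the Airflow-based orchestration this role involves (our sketch, not the employer's code), here is a minimal Airflow 2.x-style DAG with an extract and a load task; the task logic and schedule are hypothetical.

```python
# Minimal sketch: a two-task Airflow DAG for a daily ETL run.
# Task bodies and schedule are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull data from a source system (placeholder).
    print("extracting rows from source")

def load():
    # Write transformed data to the warehouse (placeholder).
    print("loading rows into warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```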
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients by participating in activities that may range from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
- B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other engineering disciplines
- Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
- Highly proficient in SQL and data model (conceptual and logical) concepts
- Highly proficient with Python & Spark (3+ years)
- Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
- 2+ years of hands-on experience with one of the top cloud platforms - AWS/GCP/Azure
- Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
- Exposure to Hadoop & shell scripting is a plus
- Minimum summary: 2 years overall; Databricks 1 year desirable; Python & Spark 1+ years; SQL; any cloud experience 1+ year

Responsibilities
- Design, implementation, and improvement of processes & automation of data infrastructure
- Tuning of data pipelines for reliability & performance
- Building tools and scripts to develop, monitor, and troubleshoot ETLs (a minimal sketch follows this listing)
- Perform scalability, latency, and availability tests on a regular basis
- Perform code reviews and QA data imported by various processes
- Investigate, analyze, correct, and document reported data defects
- Create and maintain technical specification documentation

(ref:hirist.tech)
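A minimal sketch of the ETL monitoring/QA work listed above (illustrative only, with hypothetical table and column names), using PySpark to run simple data-quality checks on a freshly loaded table.

```python
# Minimal sketch: post-load data-quality checks in PySpark.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.table("curated.customers")

checks = {
    "row_count_nonzero": df.count() > 0,
    "no_null_ids": df.filter(F.col("customer_id").isNull()).count() == 0,
    "unique_ids": df.count() == df.select("customer_id").distinct().count(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail the pipeline loudly so the scheduler marks the run as errored.
    raise RuntimeError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
```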
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Greater Chennai Area
On-site
Job Description
- Lead and mentor a team of data scientists/analysts.
- Provide analytical insights by analyzing various types of data, including mining our customer data, reviewing relevant cases/samples, and incorporating feedback from others.
- Work closely with business partners and stakeholders to determine how to design analysis, testing, and measurement approaches that will significantly improve our ability to understand and address emerging business issues.
- Produce intelligent, scalable, and automated solutions by leveraging data science skills.
- Work closely with technology teams on the development of new capabilities to define requirements and priorities based on data analysis and business knowledge.
- Develop expertise in specific areas by leading analytical projects independently, while setting goals, providing benefit estimations, defining workflows, and coordinating timelines in advance.
- Provide updates to leadership, peers, and other stakeholders that simplify and clarify complex concepts and the results of analyses effectively, with emphasis on actionable outcomes and impact on the business.

Requirements
- 2 to 5 years in advanced analytics, statistical modelling, and machine learning.
- Best-practice knowledge in credit risk, with a strong understanding of the full lifecycle from origination to debt collection (an illustrative toy sketch follows this listing).
- Well-versed in ML algorithms, big data concepts, and cloud implementations.
- High proficiency in Python and SQL/NoSQL.
- Collections and digital channels experience is a plus.
- Strong organizational skills and excellent follow-through.
- Outstanding written, verbal, and interpersonal communication skills.
- High emotional intelligence, a can-do mentality, and a creative approach to problem solving.
- Takes personal ownership; self-starter with the ability to drive projects with minimal guidance and a focus on high-impact work.
- Learns continuously; seeks out knowledge, ideas, and feedback, and looks for opportunities to build one's skills, knowledge, and expertise.
- Experience with big data and cloud computing, e.g., Spark, Hadoop (MapReduce, Pig, Hive).
- Experience in risk and credit score domains preferred.

(ref:hirist.tech)
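To make the credit-risk modeling above concrete (an illustration of ours with a fabricated toy dataset, not the employer's model), here is a minimal scikit-learn sketch of a credit-default classifier.

```python
# Minimal sketch: a toy credit-default classifier with scikit-learn.
# Features and labels below are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.integers(0, 10, n),          # number of delinquencies
])
# Toy rule: more delinquencies and lower income raise default odds.
y = (X[:, 1] * 0.4 - X[:, 0] / 50_000 + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# AUC is a common discrimination metric for credit scorecards.
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```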
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 4-6 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big Data: Experience with 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs and ability to tune for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a member of the Google Cloud Consulting Professional Services team, you will have the opportunity to contribute to the success of businesses by guiding them through their cloud journey and leveraging Google's global network, data centers, and software infrastructure. Your role will involve assisting customers in transforming their businesses by utilizing technology to connect with customers, employees, and partners.

Your responsibilities will include interacting with stakeholders to understand customer requirements and providing recommendations for solution architectures. You will collaborate with technical leads and partners to lead migration and modernization projects to Google Cloud Platform (GCP). Additionally, you will design, build, and operationalize data storage and processing infrastructure using cloud-native products, ensuring data quality and governance procedures are in place to maintain accuracy and reliability.

In this role, you will work on data migrations and modernization projects, design data processing systems optimized for scaling, troubleshoot platform/product tests, understand data governance and security controls, and travel to customer sites to deploy solutions and conduct workshops to educate and empower customers. Furthermore, you will be responsible for translating project requirements into goals and objectives, and creating work breakdown structures to manage internal and external stakeholders effectively. You will collaborate with Product Management and Product Engineering teams to drive excellence in products and contribute to the digital transformation of organizations across various industries. By joining this team, you will play a crucial role in shaping the future of businesses of all sizes and assisting them in leveraging Google Cloud to accelerate their digital transformation journey.
Posted 2 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
General Purpose
RxLogix Corporation is a niche multinational software company headquartered in Princeton, NJ, USA. We have multiple offices across the US, Europe, Japan, and India. RxLogix's enterprise software applications are used by the world's top 10 pharma companies, such as Merck, Johnson & Johnson, and Novartis. RxLogix is seeking experienced technical architects who will be responsible for the overall product architecture and design. Selected candidates will have an opportunity to architect, design, and drive our products meant to ensure patient safety around the world. The ideal candidate should have a good understanding of the wider enterprise landscape and strong experience architecting and designing enterprise-grade solutions, along with extensive experience in software development and architecture, design patterns, databases, integration, and the usage of third-party libraries and tools.

Essential Duties & Responsibilities
- Excellent understanding of systems and application architecture, high availability, reliability, scalability, layered security, cloud architecture, microservices frameworks, etc.
- Define the overall technical architecture for the system, covering functional and non-functional requirements
- Implement using the latest design patterns with the objective of zero maintenance and high performance
- Take a hands-on approach in performance tuning, debugging, framework setup, refactoring, and supporting the team during the development phase
- Prepare technical solution and architecture documents, and lead POCs and product certification initiatives
- Provide hardware sizing and deployment topology recommendations based on the needs of the client
- Enforce sound development practices and ensure quality delivery of enterprise solutions
- Ability to multitask and work in a fast-paced, dynamic environment
- Lead development across a variety of technologies, including: Java, jQuery, JavaScript, Grails, Groovy, Spring, Hibernate; cloud solutions in AWS; Oracle database; big data technologies; business intelligence tools

Minimum Requirements
- University degree, preferably B.Tech/MS/M.Tech
- Overall 8-10+ years of experience, with a minimum of 5+ years in product technical design and architecture for enterprise-grade solutions
- Strong experience utilizing J2EE architecture and design
- Experience with Oracle or another major RDBMS
- Experience in AWS technologies
- Knowledge and experience of architecting/designing enterprise applications following a microservices framework model
- Knowledge and experience with aspects of identity management and security
- Knowledge of big data technologies, including Hadoop, NoSQL data stores, and diverse analytics areas
- Knowledge and working experience of the following frameworks & technologies is a big plus: Docker, Kubernetes, Kafka, Redis, Hazelcast, Kibana, RabbitMQ
- Experience applying agile development methodologies and associated tools
- Experience with the Atlassian stack (Jira/Confluence, etc.)
- Experience with DevOps concepts and tools
- Ability to work with geographically distributed development teams
- Experience as a Solution Architect in health and life science environments is a plus

Interpersonal Skills
- Demonstrated ability to meet commitments, build consensus, negotiate resolutions, and garner respect from other teams
- Strong experience in integration and an excellent ability to see the big picture
- Detail-oriented with strong organizational skills and the ability to work well under deadlines in a changing environment

(ref:hirist.tech)
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As a Senior Data Scientist / AI Engineer with 8-10 years of experience, you will be a vital part of our team working on developing an AI-enabled ERP solution. Your primary responsibilities will include designing scalable AI models, implementing ML pipelines, and exploring the latest AI technologies to drive innovation. Additionally, you will play a crucial role in managing and mentoring a team of data scientists and AI engineers to foster a collaborative and efficient work environment.

Your key responsibilities will involve designing and building AI models from scratch, optimizing machine learning algorithms, and integrating them into our ERP solution. You will also be responsible for developing and maintaining an AI development and deployment pipeline using cloud and containerized solutions. Furthermore, conducting R&D activities to identify opportunities for AI-driven automation, working on sentiment analysis, NLP, and computer vision tasks, and deploying ML models while ensuring seamless integration into production systems will be part of your role.

Collaboration with product teams to align AI strategies with business objectives, leading and mentoring a team of data scientists and AI engineers, and fostering a collaborative team culture are essential aspects of this role. Your expertise in Python and ML frameworks, strong understanding of ML algorithms, experience with deep learning architectures, NLP, and computer vision, along with hands-on experience in deploying AI models using Flask APIs, Docker, and Kubernetes, are critical for success in this position. Preferred skills include knowledge of Java, experience with software development best practices, understanding of the SDLC, version control, and CI/CD pipelines, and experience with Big Data technologies.

If you are passionate about working with AI-driven products and have a proven track record of solving real-world challenges, we are excited to hear from you! This is a full-time position located in Trivandrum/Kochi, with a remote option available initially. If you have a minimum of 8 years of experience in AI, Machine Learning, or Data Science roles, and have worked on AI model deployment or building AI pipelines, we encourage you to apply. Leadership and team management experience, along with the ability to guide and develop junior team members, are essential requirements for this role. Your strong problem-solving skills, ability to articulate AI concepts effectively, and experience interacting with clients to communicate AI-driven solutions will be highly valued.
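As a sketch of the model-deployment pattern mentioned above (Flask APIs plus Docker), illustrative only and with a hypothetical model file, here is a minimal Flask service that loads a pickled scikit-learn model and serves predictions.

```python
# Minimal sketch: serve a pickled scikit-learn model over a Flask API.
# The model file "model.pkl" and its feature layout are hypothetical.
import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[1.0, 2.0, 3.0]]}.
    features = request.get_json()["features"]
    preds = model.predict(features).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    # In production this would sit behind gunicorn inside a Docker container.
    app.run(host="0.0.0.0", port=8080)
```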
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that cater to clients' most intricate digital transformation requirements. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we assist clients in achieving their most ambitious goals and establishing sustainable businesses that are future-ready. Our workforce of over 230,000 employees and business partners spread across 65 countries ensures that we fulfill our commitment to helping customers, colleagues, and communities thrive amidst a constantly changing world.

As a Databricks Developer at Wipro, you will be expected to possess the following essential skills:
- Cloud certification in Azure Data Engineer or a related category
- Proficiency in Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, and curation
- Experience in semantic modelling and optimizing data models to function within Rahona
- Familiarity with Azure data ingestion from on-prem sources such as mainframe, SQL Server, and Oracle (an illustrative sketch follows this listing)
- Proficiency in Sqoop and Hadoop
- Ability to use Microsoft Excel for metadata files containing ingestion requirements
- Any additional certification in Azure/AWS/GCP and hands-on experience in cloud data engineering
- Strong programming skills in Python, Scala, or Java

This position is available in multiple locations, including Pune, Bangalore, Coimbatore, and Chennai. The mandatory skill set required for this role is Databricks - Data Engineering. The ideal candidate should have 5-8 years of experience in the field.

At Wipro, we are in the process of building a modern organization that is committed to digital transformation. We are seeking individuals who are driven by the concept of reinvention - of themselves, their careers, and their skills. We encourage a culture of continuous evolution within our business and industry, adapting to the changing world around us. Join us in a purpose-driven environment that empowers you to craft your own reinvention. Realize your ambitions at Wipro, where applications from individuals with disabilities are highly encouraged.
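Illustrating the on-prem-to-Azure ingestion pattern above (a sketch of ours with hypothetical connection details, not Wipro's code), here is a minimal PySpark job that reads a SQL Server table over JDBC and lands it as a Delta table on Databricks.

```python
# Minimal sketch: ingest a SQL Server table into a Delta table on Databricks.
# Host, database, table, and path are hypothetical; the SQL Server JDBC
# driver is assumed to be available on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided on Databricks clusters

source = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "<from-secret-scope>")
          .load())

# Land the raw extract as Delta for downstream curation.
(source.write.format("delta")
       .mode("overwrite")
       .save("abfss://raw@examplelake.dfs.core.windows.net/orders"))
```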
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As part of ZS, you will work in a place driven by a passion to change lives. ZS is a management consulting and technology firm dedicated to enhancing life and its quality. Its core strength lies in its people, who work collectively to develop transformative solutions for patients, caregivers, and consumers worldwide. With a client-first approach, ZS employees bring impactful results to every engagement, partnering closely with clients to design custom solutions and technology products that drive value in key areas of their business. Your role at ZS will require inquisitiveness for learning, innovative ideas, courage, and dedication to making a life-changing impact.

At ZS, individuals are highly valued, in both the visible and invisible facets of their identities, personal experiences, and belief systems - the elements that shape each person's uniqueness and contribute to the diverse tapestry within ZS. ZS celebrates personal interests, identities, and a thirst for knowledge as integral components of success within the organization. Learn more about the diversity, equity, and inclusion initiatives at ZS, along with the networks that support ZS employees in fostering community spaces, accessing resources for growth, and amplifying the messages they are passionate about.

As an Architecture & Engineering Specialist specializing in ML Engineering at ZS's India Capability & Expertise Center (CEC), you will be part of a team that constitutes over 60% of ZS employees across three offices in New Delhi, Pune, and Bengaluru. The CEC collaborates with colleagues from North America, Europe, and East Asia to deliver practical solutions that drive the company's operations, upholding standards of analytical, operational, and technological excellence so that ZS teams achieve superior outcomes for clients.

Joining ZS's Scaled AI practice within the Architecture & Engineering Expertise Center will immerse you in a dynamic ecosystem focused on generating continuous business value for clients through innovative machine learning, deep learning, and engineering capabilities. In this role, you will collaborate with data scientists to craft cutting-edge AI models, develop and utilize advanced ML platforms, establish and implement sophisticated ML pipelines, and oversee the entire ML lifecycle.
**Responsibilities:**

- Design and implement technical features using best practices for the relevant technology stack
- Collaborate with client-facing teams to grasp the solution context and contribute to technical requirement gathering and analysis
- Work alongside technical architects to validate design and implementation strategies
- Write production-ready code that is easily testable, comprehensible to other developers, and handles edge cases and errors
- Ensure top-notch deliverables by adhering to architecture/design guidelines and coding best practices, and by engaging in periodic design/code reviews
- Develop unit tests and higher-level tests to handle expected edge cases, errors, and optimal scenarios
- Utilize bug tracking, code review, version control, and other tools to organize and deliver work
- Participate in scrum calls and agile ceremonies, and effectively communicate progress, issues, and dependencies
- Contribute consistently by researching and evaluating the latest technologies, conducting proofs-of-concept, and creating prototype solutions
- Aid the project architect in designing modules/components of the overall project/product architecture
- Break down large features into estimable tasks, lead estimation, and defend estimates with clients
- Independently implement complex features with minimal guidance, such as service- or application-wide changes
- Systematically troubleshoot code issues/bugs using stack traces, logs, monitoring tools, and other resources
- Conduct code/script reviews of senior engineers within the team
- Mentor and cultivate technical talent within the team

**Requirements:**

- Minimum 5+ years of hands-on experience deploying and productionizing ML models at scale
- Proficiency in scaling GenAI or similar applications to accommodate high user traffic and large datasets while reducing response time
- Strong expertise in developing RAG-based pipelines using frameworks like LangChain and LlamaIndex (see the sketch after this posting)
- Experience crafting GenAI applications such as answering engines, extraction components, and content authoring
- Expertise in designing, configuring, and utilizing ML engineering platforms like SageMaker, MLflow, Kubeflow, or other relevant platforms
- Familiarity with big data technologies, including Hive, Spark, and Hadoop, and queuing systems like Apache Kafka, RabbitMQ, or AWS Kinesis
- Ability to quickly adapt to new technologies, innovate in solution creation, and independently conduct POCs on emerging technologies
- Proficiency in at least one programming language such as PySpark, Python, Java, or Scala, and solid foundations in data structures
- Hands-on experience building metadata-driven, reusable design patterns for data pipelines, orchestration, and ingestion (batch and real-time)
- Experience designing and implementing solutions on distributed computing and cloud services platforms (e.g., AWS, Azure, GCP)
- Hands-on experience constructing CI/CD pipelines and awareness of application monitoring practices

**Additional Skills:**

- AWS/Azure Solutions Architect certification with a comprehensive understanding of the broader AWS/Azure stack
- Knowledge of DevOps CI/CD and data security, and experience designing on cloud platforms
- Willingness to travel to global offices as required to collaborate with clients or internal project teams

**Perks & Benefits:** ZS provides a holistic total rewards package encompassing health and well-being, financial planning, annual leave, personal growth, and professional development.
The organization offers robust skills development programs, various career progression options, internal mobility paths, and a collaborative culture that empowers individuals to thrive both independently and as part of a global team. ZS is committed to a flexible and connected work environment that lets employees combine work from home with on-site presence at clients/ZS offices for the majority of the week, integrating the ZS culture and innovative practices through planned and spontaneous face-to-face interactions.

**Travel:** Travel is an essential aspect of working at ZS, especially for client-facing roles. Business needs dictate the priority for travel, and while some projects may be local, all client-facing employees should be prepared to travel as required. Travel provides avenues to strengthen client relationships, gain diverse experiences, and enhance professional growth through exposure to different environments and cultures.

**Application Process:** Candidates must either possess or be able to obtain work authorization for their intended country of employment. To be considered, applicants must submit an online application, including a complete set of transcripts (official or unofficial).

*Note: NO AGENCY CALLS, PLEASE.*

For more information, visit [ZS Website](www.zs.com).
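To make the RAG requirement above concrete, here is a framework-agnostic sketch of the retrieve-then-generate pattern; a production pipeline would typically use LangChain or LlamaIndex as the posting notes, and the tiny corpus and the generate() stub standing in for an LLM call are hypothetical.

```python
# Sketch of the RAG pattern: retrieve relevant passages, then pass
# them as context to a generator. Corpus and generate() are stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Spark executors process partitions of a distributed dataset.",
    "Kafka topics buffer events between producers and consumers.",
    "MLflow tracks experiments, parameters, and model artifacts.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (e.g., via an API client)."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

question = "What does MLflow track?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```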
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
Pune, Maharashtra
On-site
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement, partnering collaboratively with clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

As a Senior Cloud Site Reliability Engineer at ZS, you will be part of the CCoE (Cloud Center of Excellence) team, which builds, maintains, and helps architect the systems enabling ZS client-facing software solutions. The CCoE team defines and implements best practices to ensure performant, resilient, and secure cloud solutions. It comprises analytical problem solvers from diverse backgrounds who share a passion for quality delivery, whether the customer is a client or another ZS employee, and has a presence in ZS's Evanston, Illinois, and Pune, India offices.

**What You'll Do:**

As a Senior Cloud Site Reliability Engineer, you will work with a team of operations engineers and software developers to analyze, maintain, and nurture our cloud solutions/products to support the company's ever-growing clientele. As a technical expert, you will collaborate closely with various teams to ensure the stability of the environment by:

- Analyzing the current state, designing appropriate solutions, and working with the team to implement them
- Coordinating emergency responses, performing root cause analysis, and identifying and implementing solutions to prevent recurrences
- Working with the team to identify ways to increase MTBF and lower MTTR for the environment
- Reviewing the entire application stack and executing initiatives to reduce failures, defects, and issues with overall performance
- Identifying and implementing more efficient system procedures with the team
- Maintaining environment monitoring systems to provide the best visibility into the state of deployed products/solutions (a small automation sketch follows this posting)
- Performing root cause analysis on incoming infrastructure alerts and working with teams to resolve them
- Maintaining performance analysis tools, identifying adverse changes to performance, and working with teams to resolve them
- Researching industry trends and technologies and promoting adoption of best-in-class tools and technologies
- Taking the initiative to advance the quality, performance, or scalability of our cloud solutions by influencing the architecture or design of our products
- Designing, developing, and executing automated tests to validate solutions and environments
- Troubleshooting issues across the entire stack: infrastructure, software, application, and network

**What You'll Bring:**

- 3+ years of experience working as a Site Reliability Engineer or in an equivalent position
- 2+ years of experience with AWS cloud technologies; at least one AWS certification (Solutions Architect / DevOps Engineer) is required
- 1+ years of experience functioning as a senior member of an infrastructure/software team
- Hands-on experience with AWS services such as EC2, RDS, EMR, CloudFront, ELB, API Gateway, CodeBuild, AWS Config, Systems Manager, Service Catalog, and Lambda
- Full-stack IT experience with *nix, Windows, network/firewall concepts, source control (Bitbucket), and build/dependency management and continuous integration systems (TeamCity, Jenkins)
- Expertise in at least one scripting language, Python preferred
- A firm understanding of application reliability, performance tuning, and scalability
- Exposure to the big data stack (Spark, Hadoop, Scala, etc.) is preferred
- Solid knowledge of infrastructure and cloud-native services, along with network technologies
- A solid understanding of RDBMS and cloud database engines such as PostgreSQL and MySQL
- A firm understanding of clusters, load balancers, and CDNs
- Experience in fault-tolerant system design
- Familiarity with Splunk data analysis, Datadog, or similar tools is a plus
- A Bachelor's degree (Master's preferred) in a related technical field
- Excellent analytical, troubleshooting, and communication skills
- Strong verbal, written, and team presentation skills; fluency in English is required
- Initiative and the ability to remain flexible and responsive in a dynamic environment
- Ability to quickly learn new platforms, languages, tools, and techniques as needed to meet project requirements

**Perks & Benefits:** ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working, combining work from home and on-site presence at clients/ZS offices for the majority of the week; the magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

**Travel:** Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

**To Complete Your Application:** Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find out more at: www.zs.com
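As a loose illustration of the monitoring automation this role covers, here is a small boto3 sketch that flags EC2 instances failing their status checks; the region and the alerting action are hypothetical, and it assumes AWS credentials are already configured in the environment.

```python
# Sketch of an environment-health check: flag EC2 instances whose
# status checks are failing. Region and alert hook are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_status(IncludeAllInstances=True)
for status in resp["InstanceStatuses"]:
    instance_ok = status["InstanceStatus"]["Status"] == "ok"
    system_ok = status["SystemStatus"]["Status"] == "ok"
    if not (instance_ok and system_ok):
        # A real pipeline would page on-call or open a ticket here.
        print(f"ALERT: {status['InstanceId']} is failing status checks")
```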
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Are you passionate about developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment? Join Compliance Engineering, a global team of over 300 engineers and scientists working on the most complex, mission-critical problems. You will build and operate platforms and applications to prevent, detect, and mitigate regulatory and reputational risks, leveraging the latest technology and vast amounts of data.

As part of a significant uplift and rebuild of the Compliance application portfolio, Compliance Engineering is seeking Systems Engineers. As a member of the team, you will partner with users, development teams, and colleagues globally to onboard new business initiatives, test Compliance Surveillance coverage, learn from experts, and mentor team members. You will work with technologies like Java, Python, PySpark, and big data tools to innovate, design, implement, test, and maintain software across products.

The ideal candidate will have a Bachelor's or Master's degree in Computer Science or a related field; expertise in Java development, debugging, and problem-solving; and experience in project management. Strong communication skills are essential. Desired experience includes relational databases, Hadoop and big data technologies, knowledge of the financial industry (especially the Capital Markets domain), and compliance or risk functions.

Goldman Sachs, a leading global investment banking, securities, and investment management firm, is committed to diversity, inclusion, and individual growth, providing various opportunities for professional and personal development. Goldman Sachs is an equal employment/affirmative action employer. Accommodations for candidates with special needs or disabilities are available during the recruiting process. Learn more at GS.com/careers.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Punjab
On-site
The Artificial Intelligence Lead is responsible for overseeing the development and implementation of AI strategies and projects. This role requires a combination of technical expertise, leadership skills, and strategic thinking. You will work closely with cross-functional teams to integrate AI solutions into products and services.

Key Responsibilities:

- Lead the AI team in developing and deploying AI models and algorithms
- Define the AI strategy in alignment with the company's goals and objectives
- Oversee the end-to-end AI project life cycle, from conceptualization to deployment
- Collaborate with stakeholders to identify opportunities for AI-driven solutions
- Ensure the ethical and responsible use of AI technologies
- Stay updated with the latest advancements in AI and incorporate relevant innovations
- Provide mentorship and guidance to team members
- Manage the AI team's resources, budget, and timelines
- Communicate AI project progress and insights to executive leadership

Qualifications:

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Proven experience (5+ years) in AI, machine learning, and data science
- Team-leading experience
- Strong understanding of machine learning algorithms, neural networks, and natural language processing
- Experience with AI development frameworks and tools (e.g., TensorFlow, PyTorch, scikit-learn)
- Proficiency in programming languages such as Python, R, or Java
- Experience in managing and leading technical teams
- Excellent problem-solving skills and the ability to work on complex projects
- Strong communication and interpersonal skills

Preferred Skills:

- Experience with big data technologies (e.g., Hadoop, Spark)
- Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure)
- Knowledge of AI ethics and regulatory considerations
- Proficiency with prompt engineering and working with ML/AI models
- Experience in deploying AI solutions in production environments
- Publications or contributions to AI research
- Knowledge of relevant ISO standards (e.g., ISO 9001, 27001, and 27701) and their application in the workplace

Location: Mohali/Noida

Equal Employment Opportunity.
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Jr. Software Engineer (AI/ML) at SynapseIndia, you will have the opportunity to work with cutting-edge technologies and contribute to the development of innovative solutions. With a focus on AI/ML, you will play a crucial role in designing, developing, and deploying machine learning models and AI algorithms using Python and relevant libraries.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field, along with 0-2 years of professional experience in Python programming with a specialization in AI/ML. Strong experience with Python ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, and XGBoost, together with a solid understanding of machine learning algorithms, neural networks, and deep learning, will be highly valuable. Proficiency in data manipulation libraries such as Pandas and NumPy and data visualization tools like Matplotlib and Seaborn, as well as experience with cloud platforms (AWS, GCP, Azure) and deploying ML models using Docker and Kubernetes, will also be beneficial. Familiarity with NLP, computer vision, or other AI domains is a plus.

Your responsibilities will include collaborating with cross-functional teams to gather requirements and translate business problems into AI/ML solutions. You will optimize and scale machine learning pipelines and systems for production; perform data preprocessing, feature engineering, and exploratory data analysis (a brief pipeline sketch follows this posting); and implement and fine-tune deep learning models using frameworks like TensorFlow or PyTorch. You will also conduct experiments, evaluate model performance using statistical methods, write clean, maintainable, and well-documented code, mentor junior developers, participate in code reviews, and stay up to date with the latest AI/ML research and technologies, ensuring seamless model deployment and integration with existing infrastructure.

If you are a proactive individual with strong problem-solving skills, the ability to work independently and collaboratively, excellent communication skills, and familiarity with REST APIs, microservices architecture, version control systems like Git, MLOps best practices and tools, distributed computing, and big data tools like Spark or Hadoop, we encourage you to apply for this exciting opportunity at SynapseIndia. Join us and be part of our dynamic team, where your contributions are recognized and rewarded and where you can grow both personally and professionally in a structured, eco-friendly workplace that prioritizes the well-being and job security of its employees.
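Here is a brief, hypothetical sketch of the preprocessing-plus-modeling workflow the posting describes, using scikit-learn; synthetic data stands in for a real business dataset.

```python
# Sketch of a preprocessing + modeling + evaluation workflow.
# Synthetic data stands in for real features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Chain preprocessing and the model so the same transforms are
# applied consistently at train and inference time.
pipeline = Pipeline(
    [("scale", StandardScaler()), ("model", GradientBoostingClassifier())]
)
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```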
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a skilled professional, your primary responsibility will involve designing and implementing cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to tackle specific business challenges (a minimal PyTorch sketch follows this posting). You will create conversational AI agents and chatbots that provide seamless, human-like interactions tailored to client needs, and develop and optimize Retrieval-Augmented Generation (RAG) models to enhance the AI's ability to retrieve and synthesize pertinent information for accurate responses.

Your expertise will be leveraged in managing data lakes and data warehouses (including Snowflake) and utilizing Databricks for large-scale data storage and processing. You are expected to have a thorough understanding of Machine Learning Operations (MLOps) practices and to manage the complete lifecycle of machine learning projects, from data preprocessing to model deployment.

You will play a crucial role in conducting advanced data analysis to extract actionable insights and support data-driven strategies across the organization. Collaborating with stakeholders from various departments, you will align AI initiatives with business requirements to develop scalable solutions, and you will mentor junior data scientists and engineers, encouraging innovation, skill enhancement, and continuous learning within the team. Staying updated on the latest advancements in AI and deep learning, you will experiment with new techniques to enhance model performance and drive business value, communicate findings to both technical and non-technical audiences through reports, dashboards, and visualizations, and utilize cloud platforms like AWS Bedrock to deploy and manage AI models at scale, ensuring optimal performance and reliability.

Your technical skills should include hands-on experience with PyTorch, TensorFlow, and scikit-learn for deep learning and machine learning tasks; proficiency in Python or R; and knowledge of big data technologies like Hadoop and Spark. Familiarity with MLOps, data-handling tools such as pandas and Dask, and cloud computing platforms like AWS is required. Skills in the LlamaIndex and LangChain frameworks, as well as data visualization tools like Tableau and Power BI, are desirable.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field, with specialization in deep learning, significant experience with PyTorch and TensorFlow, and familiarity with reinforcement learning, NLP, and generative models.

In addition to challenging work, you will enjoy a friendly work environment, work-life balance, company-sponsored medical insurance, a 5-day work week with flexible timings, frequent team outings, and yearly leave encashment. This exciting opportunity is based in Ahmedabad.
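As a minimal, illustrative sketch of the PyTorch work mentioned above, the following trains a tiny feed-forward network on random tensors that stand in for real data; every detail (architecture, optimizer, epoch count) is a placeholder.

```python
# Sketch of defining and training a small PyTorch model.
# Random tensors stand in for a real dataset.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(256, 10)  # placeholder features
y = torch.randn(256, 1)   # placeholder targets

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()        # compute gradients
    optimizer.step()       # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```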
Posted 2 weeks ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
40175 Jobs | Dublin
Wipro
19626 Jobs | Bengaluru
Accenture in India
17497 Jobs | Dublin 2
EY
16057 Jobs | London
Uplers
11768 Jobs | Ahmedabad
Amazon
10704 Jobs | Seattle,WA
Oracle
9513 Jobs | Redwood City
IBM
9439 Jobs | Armonk
Bajaj Finserv
9311 Jobs |
Accenture services Pvt Ltd
8745 Jobs |