2.0 - 6.0 years
0 - 0 Lacs
Andhra Pradesh
On-site
We are looking for a resilient, forward-thinking Python Developer with expertise in AI and machine learning deployment to join our growing technology team as a Python with AI/ML Developer. You will build, deploy, and maintain scalable ML models and backend systems that support AI-driven products. To excel in this position, you need a passion for innovation, attention to detail, and the ability to solve complex problems with effective technical solutions.

Your responsibilities will include developing, testing, and deploying machine learning models using Python and AI frameworks such as TensorFlow, PyTorch, and scikit-learn. You will collaborate with data scientists, product teams, and engineers to transition prototypes into production-ready systems. You will also build and manage APIs and backend services using frameworks like Flask and FastAPI, deploy ML models in cloud or on-prem environments using Docker, Kubernetes, and CI/CD pipelines, and optimize model performance post-deployment.

The successful candidate holds a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field and has at least 2 years of experience in Python development with a focus on AI/ML projects. You must have strong knowledge of ML/AI tools and frameworks, experience deploying and maintaining ML models in production environments, proficiency in containerization and orchestration tools, and familiarity with REST APIs and microservice architecture. Knowledge of cloud platforms, problem-solving skills, and a proactive, team-oriented mindset are essential.

Preferred skills include experience with data streaming platforms, familiarity with MLOps tools, exposure to computer vision, NLP, or deep learning projects, knowledge of database systems, experience with interactive dashboard tools, and contributions to open-source AI/ML projects or Kaggle competitions. Staying current with the latest research and best practices in AI, ML, and MLOps, and documenting technical architecture and best practices, are also crucial aspects of this role, as is troubleshooting issues related to ML model performance, deployment, and data pipelines.
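For a concrete flavor of the serving side of such a role, here is a minimal sketch of an ML prediction API of the kind the posting describes, using FastAPI and a scikit-learn-style model. The artifact name model.joblib and the flat feature-vector input are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: serving a trained scikit-learn model behind a FastAPI endpoint.
# "model.joblib" and the flat feature-vector contract are placeholder assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed pre-trained artifact on disk

class Features(BaseModel):
    values: list[float]  # feature order fixed at training time

@app.post("/predict")
def predict(payload: Features) -> dict:
    # Reshape the vector into a single-row batch and run inference.
    x = np.asarray(payload.values).reshape(1, -1)
    prediction = model.predict(x)
    return {"prediction": prediction.tolist()}
```

Run locally with `uvicorn main:app` (assuming the file is named main.py); in the deployment flow described above, the same app would then be containerized with Docker and shipped through a CI/CD pipeline.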
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
Karnataka
On-site
We help the world run better. At SAP, you are encouraged to bring out your best. Our culture is built on collaboration and a shared dedication to improving how the world works. We focus daily on laying the groundwork for the future and cultivating a workplace that celebrates diversity, values adaptability, and is dedicated to purpose-driven, forward-thinking work. We provide a highly collaborative, supportive team environment with an emphasis on continuous learning and growth, recognition for your unique contributions, and a range of benefits for you to select from.

In this role, you will:
- Collaborate with a team of skilled and motivated developers.
- Engage with stakeholders to grasp business requirements and transform them into technical solutions.
- Develop and implement scalable, efficient code following industry best practices.
- Take ownership of your code throughout all stages of the software development lifecycle: ideation, design, development, quality assurance, and customer support.
- Develop a comprehensive understanding of features spanning various components.
- Adhere to the agile engineering practices and processes adopted by the team.
- Produce and maintain detailed technical documentation and design specifications.
- Mentor junior developers and offer guidance on best practices and technical strategies.

Desired Qualifications:
- An engineering graduate with a minimum of 9 years of overall software development experience.
- Proficiency in web-based cloud applications built on the JavaScript technology stack (JavaScript, TypeScript, NodeJs, React, Angular, etc.).
- Extensive experience in all facets of software development, including design, architecture, and object-oriented programming.
- Strong analytical skills to assess complex problems and propose effective solutions.
- Willingness to continuously learn new technologies and business processes.
- Experience in cloud development processes and microservices architecture.
- Excellent collaboration and communication skills, with a background in working with diverse teams across multiple time zones.
- Hands-on experience in automation testing with tools like Mocha, Cypress, or equivalent.
- Proficiency in SAP UI5, SQL, RDBMS, or HANA would be advantageous.
- Involvement in organizational initiatives such as hiring, representing the team in internal and external forums, and mentoring others.
- Knowledge of Apache Spark and Python is a plus.

You will join the Business Data Cloud (BDC) team, which offers customers a unified data management layer providing access to all application data through curated data products. The BDC organization delivers a Business Data Fabric and Insight Applications that include data warehousing, cross-analytical capabilities, planning, and benchmarking tools to derive actionable insights within customers' business processes. The cloud service is designed for ease of use and is open for customer and partner extensions across all enterprise lines of business.

At SAP, we believe in fostering an inclusive workplace, promoting health and well-being, and offering flexible working arrangements so that every individual, regardless of background, feels included and can perform at their best. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you require assistance during the application process, please contact the Recruiting Operations Team at Careers@sap.com. SAP is an equal opportunity and affirmative action employer. We value diversity and invest in our employees to inspire confidence and unlock their full potential. We are dedicated to unleashing all talent and creating a more equitable world.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
We are looking for a skilled and enthusiastic Applied AI/ML Engineer to join our team. As an Applied AI/ML Engineer, you will lead the entire process of foundational model development, focusing on cutting-edge generative AI techniques. Your main objective will be to implement learning methods that are efficient in data and compute, specifically addressing challenges relevant to the Indian scenario. Your tasks will involve optimizing model training and inference pipelines, deploying production-ready models, ensuring scalability through distributed systems, and fine-tuning models for domain adaptation.

Collaboration with various teams will be essential as you build strong AI stacks and integrate them seamlessly into production pipelines. Beyond conducting research and experiments, you will be crucial in converting advanced models into operational systems that generate tangible results. Your leadership will involve working closely with technical team members and subject matter experts, documenting technical processes, and maintaining well-structured codebases to encourage innovation and reproducibility. This position is ideal for proactive individuals who are passionate about spearheading significant advances in generative AI and implementing scalable solutions for real-world impact.

Your responsibilities will include:
- Developing and training foundational models across different modalities
- Managing the end-to-end lifecycle of foundational model development, from data curation to model deployment, in collaboration with core team members
- Conducting research to enhance model accuracy and efficiency
- Applying state-of-the-art AI techniques in text, speech, and language processing
- Collaborating with cross-functional teams to construct robust AI stacks and integrate them smoothly into production pipelines
- Creating pipelines for debugging, CI/CD, and observability of the development process
- Demonstrating project leadership and offering innovative solutions
- Documenting technical processes, model architectures, and experimental outcomes, while maintaining clear and organized code repositories

To be eligible for this role, you should hold a Bachelor's or Master's degree in a related field and possess 2 to 5 years of industry experience in applied AI/ML. Minimum requirements include proficiency in Python programming and familiarity with 3-4 tools from the list below:
- Foundational model libraries and frameworks (TensorFlow, PyTorch, HF Transformers, NeMo, etc.)
- Distributed training (SLURM, Ray, PyTorch DDP, DeepSpeed, NCCL, etc.)
- Inference servers (vLLM)
- Version control and observability (Git, DVC, MLflow, W&B, Kubeflow)
- Data analysis and curation tools (Dask, Milvus, Apache Spark, NumPy)
- Speech and text-to-speech tools (Whisper, Voicebox, VALL-E (X), HuBERT/UnitSpeech)
- LLMOps tools, Docker, etc.
- Hands-on experience with AI application libraries and frameworks (DSPy, LangGraph, LangChain, LlamaIndex, etc.)
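As a small illustration of the first tool family in that list, here is a minimal sketch of loading a causal language model with Hugging Face Transformers and generating text. The checkpoint name is a stand-in; a real project would pin its own model.

```python
# Minimal sketch: load a small causal LM with Hugging Face Transformers and
# generate text. The model name is a placeholder assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in checkpoint for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Generative AI for Indian languages", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```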
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
We are the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. Through our unmatched range of services, we provide stability, security, and improved business performance, freeing our clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide, with over $3 trillion of assets under management, put their trust in us. We believe that success is driven by passion and purpose: our passion for technology is surpassed only by our commitment to empowering our employees around the world.

We have an exciting opportunity for a Cloud Data Engineer. This full-time, onsite position is open to an experienced Senior Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer-facing and require excellent communication and people skills. A positive attitude, rigorous work habits, and professionalism in the workplace are a must, and fluency in English, both written and verbal, is required.

As a senior cloud data engineer with 7+ years of experience, you will bring strong knowledge of and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake data warehouse, and Microsoft Fabric. Experience with Azure Databricks, Azure Cosmos DB, Azure AI, and developing cloud-based applications is good to have. You should be able to analyze problems and provide solutions; design, implement, and manage data warehouse solutions using Azure Synapse Analytics or similar technologies; migrate data from on-premises to the cloud; and apply sound data modeling techniques. A solid understanding of cloud architecture principles and best practices is expected.

Your responsibilities include designing and developing ETL/ELT processes to move data between systems and transform it for analytics; developing and maintaining data pipelines using ADF and Synapse; writing complex SQL scripts and transformations; building reports in at least one tool such as Power BI or Tableau; managing and optimizing databases; translating business requirements into technical designs; performing analysis, developing and testing code; designing and developing cloud-based applications using Python on a serverless framework; and creating, maintaining, and enhancing applications while following Agile methodology (SCRUM). Strong programming skills in SQL, Python, or Scala, solid troubleshooting skills, the ability to work effectively in a team and independently as an individual contributor, and the ability to communicate complex technical concepts to non-technical stakeholders are all essential. Knowledge of CI/CD pipelines, Python, and API gateways is expected, and product management or business analysis experience is a nice-to-have.

Our culture is all about connection: connection with our clients, our technology, and, most importantly, with each other. In addition to working with an amazing team around the world, we offer a competitive compensation package.
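To illustrate the ETL/ELT work described above, the sketch below shows a minimal PySpark transform of the kind that runs inside Azure Synapse or Databricks: read a raw file, clean and type the columns, and write a curated layer. The paths, column names, and app name are assumptions, not details from the posting.

```python
# Minimal sketch of an ETL step: read raw data, apply transformations, and
# write a curated layer. Paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-orders").getOrCreate()

raw = spark.read.option("header", True).csv("/landing/orders.csv")
curated = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())  # drop rows with unusable amounts
)
curated.write.mode("overwrite").parquet("/curated/orders")
```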
If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you. Love your job, share your technology passion, and create your future here!
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
Karnataka
On-site
We help the world run better. At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned with our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

In this role, you will:
- Work with a team of highly technical and motivated developers.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Design and implement scalable and efficient code, following best practices and industry standards.
- Own your code across every stage of the software development lifecycle, including ideation, design, development, quality, and customer support.
- Build an end-to-end understanding of features across various components.
- Adhere to the agile engineering practices and processes followed by the team.
- Create and maintain detailed technical documentation and design specifications.
- Mentor junior developers and provide guidance on best practices and technical approaches.

Desired qualifications:
- Engineering graduate with over 9 years of overall software development experience.
- Expertise in web-based cloud applications using the JavaScript technology stack (JavaScript, TypeScript, NodeJs, React, Angular, etc.).
- Proven broad and deep experience in all aspects of software development, such as design, architecture, and object-oriented programming.
- Strong analytical skills to analyze complex problems and propose solutions.
- Willingness to continuously learn both new technologies and business processes.
- Experience in cloud development processes and microservice architecture.
- Excellent collaboration and communication skills, with experience working with teams across multiple time zones.
- Hands-on experience in automation testing with Mocha, Cypress, or equivalent.
- Good debugging and troubleshooting skills.
- Experience in SAP UI5, SQL, RDBMS, or HANA would be an added advantage.
- Support for organizational initiatives like hiring, representing the team in internal and external forums, and mentoring others.
- Knowledge of Apache Spark and Python is an added advantage.

The Business Data Cloud (BDC) organization provides customers with a unified data management layer that serves as a gateway to all application data via curated data products, including the ability to connect and harmonize non-application data sources. The organization offers a Business Data Fabric and Insight Applications with data warehousing, cross-analytical, planning, and benchmarking capabilities to derive and automate actionable insights within customers' business processes. It is open for customer and partner extensions and designed as an easy-to-use cloud service for all enterprise lines of business.

SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved into a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and a commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves.

At SAP, you can bring out your best. SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone, regardless of background, feels included and can run at their best. We believe we are made stronger by the unique capabilities and qualities each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and need accommodation or special assistance to navigate our website or complete your application, please send an e-mail with your request to the Recruiting Operations Team at Careers@sap.com. Successful candidates might be required to undergo a background verification with an external vendor.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
At Crimson Enago, we are dedicated to developing AI-powered tools and services that enhance the productivity of researchers and professionals. The stages of knowledge discovery, acquisition, creation, and dissemination can be cognitively demanding and interconnected, which is why our flagship products, Trinka and RAx, are designed to streamline and accelerate these processes.

Trinka (www.trinka.ai) is an AI-powered English grammar checker and language enhancement writing assistant tailored for academic and technical writing. Developed by linguists, scientists, and language enthusiasts, Trinka identifies and corrects numerous intricate writing errors, ensuring your content is error-free. It goes beyond basic grammar correction by addressing contextual spelling mistakes and advanced grammar errors, enhancing vocabulary usage, and providing real-time writing suggestions. With subject-specific correction features, Trinka ensures that writing is professional, concise, and engaging, and its Enterprise solutions offer unlimited access and customizable options to leverage its full capabilities.

RAx (https://raxter.io) is the first smart workspace designed to help researchers, including students, professors, and corporate researchers, optimize their research projects. Powered by proprietary AI algorithms and innovative problem-solving approaches, RAx aims to become the go-to workspace for research-intensive projects. By bridging information sources such as research papers, blogs, wikis, books, courses, and videos with user behaviors like reading, writing, annotating, and discussing, RAx uncovers new insights and opportunities in the academic realm.

Our team consists of passionate researchers, engineers, and designers who share the goal of revolutionizing research-intensive project workflows. We are committed to reducing cognitive load and facilitating the conversion of information into knowledge. The engineering team is dedicated to building a scalable platform that manages vast amounts of data, implements AI processing, and serves users worldwide. We firmly believe that research plays a crucial role in shaping a better world and strive to make the research process accessible and enjoyable.

As an SDE-3 Fullstack at Trinka (https://trinka.ai), you will lead a team of web developers, drive end-to-end project development, and collaborate with key stakeholders such as the Engineering Manager, Principal Engineer, and Technical Project Manager. Your responsibilities will include hands-on coding, team leadership, hiring, training, and ensuring project delivery. We are looking for an SDE-3 Fullstack with at least 5 years of enterprise full-stack web experience, focused on the AngularJS-Java-AWS stack. Ideal candidates possess excellent research skills, advocate comprehensive testing practices, demonstrate strong command of software design patterns, and have expertise in optimizing scalable solutions. Experience with AWS technologies, database management, frontend development, and collaboration within a team-oriented environment is highly valued.

If you meet these requirements and are enthusiastic about contributing to a dynamic and innovative team, we invite you to join us in our mission to simplify and revolutionize research-intensive projects. Visit our websites: https://www.trinka.ai/, https://raxter.io/, https://www.crimsoni.com/
Posted 1 week ago
4.0 - 8.0 years
0 - 0 Lacs
Hyderabad, Telangana
On-site
We are seeking an experienced Data Engineer proficient in Databricks, PySpark, and Python to join a healthcare data-focused team. You will design and maintain scalable data pipelines and solutions for complex healthcare datasets, utilizing Azure cloud services and advanced distributed data processing frameworks. Strong client-facing communication skills are crucial, as you will engage directly with US-based stakeholders.

Your key responsibilities:
- Design, develop, and maintain large-scale data processing systems using Databricks and PySpark.
- Build and optimize robust, scalable pipelines for data ingestion, cleaning, transformation, and storage from diverse sources.
- Collaborate with business and technical stakeholders to analyze requirements and translate them into technical solutions.
- Troubleshoot and enhance the performance of distributed data pipelines.
- Ensure adherence to healthcare data governance, privacy, and security standards.
- Operate effectively in an offshore setup, supporting overlap with US West Coast working hours.
- Deliver clear technical documentation and participate in agile team processes.

Must-have skills:
- Hands-on experience with Databricks for production data pipeline development.
- Proficiency in Apache Spark/PySpark and distributed data processing.
- Advanced Python scripting for data engineering tasks.
- Experience with Azure Data Lake, Azure Storage, or related cloud data services.
- Healthcare data knowledge, including compliance requirements and data standards.
- Strong client-facing verbal and written communication skills, with prior client interaction.
- A Bachelor's degree in Computer Science, Engineering, Data Science, or equivalent experience.

Preferred qualifications include experience with large healthcare datasets (claims, EMR, HL7, FHIR, etc.), exposure to CI/CD pipelines for data engineering workflows, familiarity with Delta Lake and ML integrations in Databricks, experience with Power BI or similar reporting tools, and relevant certifications such as Azure Data Engineer Associate.

This is a remote position with potential for a hybrid engagement, working from Coimbatore, Bangalore, or Hyderabad. Work timing is 3:00 pm to 11:00 pm IST, giving a 2-3 hour overlap with the US West Coast. The salary range is 8 LPA to 11 LPA, and 4 to 5 years of experience is required. Relevant skills: Databricks, Azure Data Lake, HL7, Apache Spark, Python, Azure Storage, CI/CD pipelines, Power BI, FHIR, Delta Lake, healthcare data knowledge, EMR, PySpark, ML integrations.
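As an illustration of the pipeline work described, here is a minimal Databricks-style cleaning step for a hypothetical claims feed in PySpark. The mount paths, column names, and dedup key are assumptions, not details from the posting.

```python
# Minimal sketch of a Databricks-style cleaning step for a claims feed.
# Source path, column names, and the dedup key are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

claims = spark.read.json("/mnt/raw/claims/")  # assumed raw landing zone
clean = (
    claims.dropDuplicates(["claim_id"])                # keep one row per claim
          .filter(F.col("member_id").isNotNull())      # drop orphan records
          .withColumn("service_date", F.to_date("service_date"))
)
clean.write.format("delta").mode("append").save("/mnt/curated/claims")
```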
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will need 8 to 12 years of experience in the IT/software/BFSI/banking/fintech industry. The role requires working from the office in Noida five days a week.

Your main responsibilities will include meeting with technology managers and the design team to understand the company's goals, leading a technical team, capturing requirements, designing technical solutions using the Spring Framework, examining and defining the current architecture, designing scalable architectures for Java-based applications, reviewing code, troubleshooting design flaws, and ensuring the quality and performance of the technical solution.

You should have advanced knowledge of software architecture, design, web programming, and software network implementation, along with expertise in Java, Spring Boot, Spring Cloud, multithreading, and distributed, scalable architecture; Apache Spark, data science, and ML & AI; and RDBMS (Oracle, MySQL) and NoSQL (MongoDB, Neo4j, Cassandra) databases. The role also calls for the ability to solve complex software system issues and present technical information clearly, entrepreneurial skills, a detail-oriented and organized approach, strong time management and influencing skills, excellent communication and interpersonal skills, a collaborative mindset, and a Bachelor's degree in software engineering or computer science.

In addition, you will contribute to the development process by implementing PoCs, standardizing software delivery using DevOps practices, overseeing the progress of the development team, and assisting the software design team with application integration. You are expected to keep your job knowledge current by participating in educational opportunities, reading professional publications, and engaging in professional organizations.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Spark and Scala Developer at Infosys, you will play a crucial role in facilitating digital transformation for our clients within a global delivery model. Your responsibilities will include conducting independent research on technologies, recommending suitable solutions, and contributing to technology-specific best practices and standards. You will need to interact effectively with key stakeholders and apply your technical expertise across various stages of the Software Development Life Cycle. As part of our learning culture, teamwork and collaboration are highly encouraged, excellence is acknowledged, and diversity is respected and valued.

Required Qualifications:
- Must be located within commuting distance of Raleigh, NC, Charlotte, NC, or Richardson, TX, or be open to relocating to these areas.
- A Bachelor's degree or foreign equivalent from an accredited institution is required. Alternatively, three years of progressive experience in the specialty can be considered in place of each year of education.
- All candidates authorized to work in the United States are welcome to apply.
- Minimum of 4 years of experience in Information Technology.
- Profound understanding of distributed computing principles and big data technologies.
- At least 3 years of hands-on experience with Apache Spark, Scala, Spark SQL, and Starburst.
- Knowledge of data serialization formats like Parquet, Avro, or ORC.
- Familiarity with data processing and transformation techniques.

Preferred Qualifications:
- Hands-on experience with data lakes, data warehouses, and ETL processes.
- Solid comprehension of Agile software development frameworks.
- Previous experience in the banking domain.
- Exceptional communication and analytical skills.
- Ability to collaborate in teams within a diverse, multi-stakeholder environment involving Business and Technology teams.
- Willingness and experience to work in a global delivery environment.

This role may involve prolonged periods of sitting and computer work. Effective communication via telephone, email, or face-to-face interactions is essential. Travel might be necessary based on job requirements.

About Us: Infosys is a renowned global leader in next-generation digital services and consulting. We assist clients in over 50 countries in navigating their digital transformation journeys. With more than four decades of experience managing the systems and operations of global enterprises, we expertly guide our clients through their digital evolution. By empowering enterprises with an AI-powered core to prioritize change execution and delivering agile digital solutions at scale, we aim to achieve exceptional levels of performance and customer satisfaction. Our commitment to continuous improvement is driven by an always-on learning agenda, enabling the transfer of digital skills, expertise, and innovative ideas from our thriving innovation ecosystem.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level role responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include using your knowledge of applications development procedures and concepts, along with basic knowledge of other technical areas, to identify and define necessary system enhancements. This will involve using script tools, analyzing and interpreting code, consulting with users, clients, and other technology groups on issues, recommending programming solutions, and installing and supporting customer exposure systems. You will apply fundamental knowledge of programming languages to design specifications, analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. You will identify problems, analyze information, and make evaluative judgments to recommend and implement solutions, operating with a limited level of direct supervision and exercising independence of judgment and autonomy. You will also act as a subject matter expert to senior stakeholders and/or other team members.

You should have 4-6 years of proven experience in developing and managing big data solutions using Apache Spark and Scala, with a strong command of Spark Core, Spark SQL, and Spark Streaming and strong programming skills in Scala, Java, or Python. Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume is required, along with proficiency in SQL, experience with relational databases (Oracle/PL-SQL), and familiarity with data warehousing concepts and ETL processes. You should also have experience in performance tuning of large technical solutions and knowledge of data modeling, data architecture, data integration techniques, and best practices for data security, privacy, and compliance.

Furthermore, experience with Java, web services, microservices, SOA, Apache Spark, Hive, SQL, and the Hadoop ecosystem is necessary. You should have experience developing frameworks and utility services, delivering high-quality software following continuous delivery practices, and using code quality tools. Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark, as well as knowledge of implementing different data storage solutions, is also expected.

The ideal candidate will have a Bachelor's degree or equivalent experience. Please note that this job description provides a high-level overview of the work performed; other job-related duties may be assigned as required.
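For a flavor of the Spark Streaming plus Kafka portion of this stack, here is a minimal sketch of a Structured Streaming job in PySpark (the posting's stack is Scala, but the API is analogous). The broker address, topic, schema, and paths are assumptions, and the job assumes the Spark-Kafka connector package is on the classpath.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured
# Streaming and land them as Parquet. Broker, topic, and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

schema = StructType([
    StructField("trade_id", StringType()),
    StructField("price", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "trades")
         .load()
         # Kafka values arrive as bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/trades")
          .option("checkpointLocation", "/chk/trades")  # required for recovery
          .start()
)
query.awaitTermination()
```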
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Senior Programmer Analyst position is an intermediate-level role responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

You will conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and will establish and implement new or revised applications systems and programs to meet specific business needs or user areas. You will monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users.

Your role requires in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. You will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Additionally, you will consult with users, clients, and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. You will ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. The ability to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members is essential.

You will be expected to assess risk appropriately when making business decisions, with particular consideration for the firm's reputation and for safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency. You will communicate with technology and functional teams across the globe, with minimal interaction with external clients. Availability during the second shift, supporting Indian timings, and the ability to attend calls during the first half of the US time zone are necessary for this role.

To qualify, you must have 8+ years of application/software development and maintenance experience, including at least 5 years with big data technologies such as Apache Spark, Hive, and Hadoop. Proficiency in Python, Java, or Scala is required, along with experience in Java, web services, XML, JavaScript, microservices, and SOA, and technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem. Experience developing frameworks and utility services, creating large-scale distributed applications with Hadoop and Spark, and implementing different data storage solutions such as RDBMS, Hive, HBase, and NoSQL databases is essential, as are strong analytical and communication skills. A Bachelor's degree or equivalent experience is required.

Additionally, experience in the banking domain, hands-on experience with cloud technologies, AI/ML integration and the creation of data pipelines, and knowledge of vendor products like Tableau, Arcadia, Paxata, and KNIME, as well as API development, are considered advantageous. Please note that this job description provides a high-level overview of the work performed; other job-related duties may be assigned as required.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
About the job: At Citi, we're not just building technology, we're building the future of banking. Encompassing a broad range of specialties, roles, and cultures, our teams are creating innovations used across the globe. Citi is constantly growing and progressing through our technology, with a laser focus on evolving the way things are done. As one of the world's most global banks, we're changing how the world does business.

Shape your career with Citi. We're currently looking for a high-caliber professional to join our team as AVP - Data Engineer, based in Pune, India. Being part of our team means that we'll provide you with the resources to meet your unique needs, empower you to make healthy decisions, and manage your financial well-being to help plan for your future. For instance:
- We provide programs and services for your physical and mental well-being, including access to telehealth options, health advocates, confidential counseling, and more. Coverage varies by country.
- We empower our employees to manage their financial well-being and help them plan for the future.
- We provide access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses.

Responsibilities (data pipeline development, design, and automation):
- Design and implement efficient database structures to ensure optimal performance and support analytics.
- Design, implement, and optimize secure data pipelines to ingest, process, and store large volumes of structured and unstructured data from diverse sources, including vulnerability scans, security tools, and assessments.
- Work closely with stakeholders to provide clean, structured datasets that enable advanced analytics and insights into cybersecurity risks, trends, and remediation activities.

Technical competencies:
- 7+ years of hands-on experience with Scala and Spark.
- 10+ years of experience designing and developing data pipelines for data ingestion or transformation using Spark with Scala.
- Good experience with big data technologies (HDFS, Hive, Apache Spark, Spark SQL, Spark Streaming, Spark job optimization, and Kafka).
- Good exposure to various file formats (JSON, Avro, Parquet).
- Knowledge of agile (Scrum) development methodology is a plus.
- Strong development and automation skills.
- The right attitude to participate and contribute through all phases of the development lifecycle.
- Secondary skill set: NoSQL, Starburst, Python.
- Optional: Java Spring, Kubernetes, Docker.

Soft skills:
- Strong communication skills.
- Comfort reporting to both business and technology senior management.
- Ability to work with stakeholders and keep them updated on developments, estimation, delivery, and issues.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Delhi
On-site
As an experienced data analytics professional with 1 to 2 years of experience, you will be responsible for developing and implementing data analytics methodologies. The role requires good interpersonal skills along with excellent communication abilities.

Your technical skills must include proficiency in Python, machine learning, deep learning, data wrangling, and integration with big data tools such as Hadoop, Sqoop, Impala, Hive, Pig, and SparkR. You should also have a solid understanding of statistics, data mining, algorithms, time series analysis and forecasting, SQL queries, and Tableau data visualization. A good grasp of technologies like Hadoop, HBase, Hive, Pig, MapReduce, Python, R, Java, Apache Spark, Impala, and machine learning algorithms is essential for this role.

Your responsibilities will involve developing training content on big data and Hadoop technologies for students, working professionals, and corporates. You will conduct both online and classroom training sessions, provide practical use cases and assignments, and design self-paced recorded training sessions. It is important to continuously enhance teaching methodologies for an effective online learning experience and to work collaboratively in small teams to make a significant impact.

You will design and oversee the development of real-time projects to give trainees practical exposure. Additionally, you may work as a consultant or architect developing and training real-time big data applications for corporate clients, either part-time or full-time. Hands-on knowledge of tools like Anaconda Navigator, Jupyter Notebook, Hadoop, Hive, Pig, MapReduce, Apache Spark, Impala, SQL, and Tableau is required to excel in this role.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Rajkot, Gujarat
On-site
As a Software Developer at our company, you will collaborate with client developers and offshore developers to deliver project tasks efficiently. Your responsibilities will include analyzing the system domain and client requirements, creating and maintaining a robust system to support business operations with a focus on usability, and completing the necessary milestones within the constraints of planned projects.

You should possess very strong knowledge of object-oriented programming (OOP) and Core Java, along with hands-on experience in Java EE and Spring Boot. Experience with Spring, Spring Security, Hibernate, JPA, and RESTful services is required, as is familiarity with web technologies like jQuery, Angular, React, Validation Engine, JSON, GSON, Ajax, CSS, and HTML5.

The ideal candidate will demonstrate excellent debugging and problem-solving skills, along with a good understanding of Oracle, Postgres, MySQL, and MS SQL Server databases. Knowledge of microservices architecture and design principles, build tools like Maven or Gradle, application security fundamentals, performance tuning, scalability, and code versioning tools such as Git, SVN, and TFS is crucial.

It is desirable for candidates to have knowledge of and command over big data technologies like Apache Hadoop, Apache Spark, and AWS EMR. Proficiency in front-end JS frameworks like ReactJS and Angular, as well as knowledge of search engines such as Apache Solr, Elasticsearch, and AWS OpenSearch, is considered advantageous. Familiarity with cloud platforms is also a plus.

To excel in this role, you must demonstrate a proactive, ownership-driven way of working, strong problem-solving skills, and excellent written and verbal communication abilities. If you meet these qualifications and are excited about this opportunity, please send your resume to hr@prominentpixel.com.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Vadodara, Gujarat
On-site
As a Lead Data Engineer at Rearc, you will play a crucial role in establishing and maintaining technical excellence within our data engineering team. Your extensive experience in data architecture, ETL processes, and data modeling will be key to optimizing data workflows for efficiency, scalability, and reliability. Collaborating closely with cross-functional teams, you will design and implement robust data solutions that align with business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders is essential as you drive data-driven initiatives and ensure their successful implementation.

With over 10 years of experience in data engineering or related fields, you bring a wealth of expertise in managing and optimizing data pipelines and architectures. Proficiency in Java and/or Python, along with experience in data pipeline orchestration on platforms like Airflow, Databricks, DBT, or AWS Glue, will be invaluable. Hands-on experience with data analysis tools and libraries such as PySpark, NumPy, Pandas, or Dask is required, and proficiency with Spark and Databricks is highly desirable.

Your proven track record of leading complex data engineering projects, coupled with hands-on experience in ETL processes, data warehousing, and data modeling tools, enables you to deliver efficient and robust data pipelines. You possess in-depth knowledge of data integration tools and best practices, as well as a strong understanding of cloud-based data services and technologies like AWS Redshift, Azure Synapse Analytics, and Google BigQuery. Your strategic and analytical skills enable you to solve intricate data challenges and drive data-driven decision-making.

In this role, you will collaborate with stakeholders to understand data requirements and challenges, implement data solutions with a DataOps mindset using modern tools and frameworks, lead data engineering projects, mentor junior team members, and promote knowledge sharing through technical blogs and articles. Your exceptional communication and interpersonal skills will facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels.

At Rearc, we empower engineers to build innovative products and experiences by providing them with the best tools possible. If you are a cloud professional with a passion for problem-solving and a desire to make a difference, join us in our mission to solve problems and drive innovation in data engineering.
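As a small illustration of the orchestration platforms named above, here is a minimal Airflow 2.x-style DAG with two dependent tasks. The DAG id, schedule, and task bodies are illustrative assumptions.

```python
# Minimal sketch: an Airflow 2.x DAG with two dependent tasks.
# DAG id, schedule, and task bodies are placeholders for real pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source")  # stands in for a real extraction step

def transform():
    print("clean and model the data")  # stands in for a real transform step

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```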
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Kochi, Kerala
On-site
You will be responsible for capturing user requirements and translating them into business and digitally enabled solutions across various industries. Your key responsibilities will include designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals, and solving complex data problems to deliver insights that help the business achieve its objectives.

Your tasks will involve sourcing structured and unstructured data from different touchpoints and formatting and organizing them into an analyzable format. You will create data products that enhance the productivity of analytics team members and utilize AI services such as vision and translation to generate outcomes for further pipeline steps. You will also foster a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions, prepare data to establish a unified database, and construct tracking solutions to ensure data quality. You should be proficient in creating production-grade analytical assets deployed using the guiding principles of CI/CD.

The ideal candidate is an expert in Python, Scala, PySpark, and PyTorch, plus at least two other languages such as JavaScript. You should have extensive experience in data analysis within big data environments, with data libraries such as Pandas, SciPy, TensorFlow, and Keras, and with SQL, including a minimum of 2-3 years of hands-on experience with these technologies. In addition, you should have experience with BI tools like Tableau, Power BI, or Looker and a good working knowledge of key data analytics concepts such as dimensional modeling, ETL, reporting and dashboarding, data governance, handling of structured and unstructured data, and infrastructure requirements. Experience with cloud data warehouses like Redshift or Synapse and certification in AWS, Azure, Snowflake, or Databricks data analytics is preferred.

The role requires 3.5 to 5 years of experience and a bachelor's degree. This job is based in Kochi, Coimbatore, or Trivandrum and requires expertise in Python/Scala and PySpark/PyTorch and familiarity with Redshift.
Posted 1 week ago
12.0 - 17.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking a skilled Azure Databricks Architect with 12 to 17 years of experience in Python and SQL. As an Azure Databricks Architect at our company, you will be responsible for data architecture, data engineering, and analytics. You should have at least 5 years of hands-on experience with Azure Databricks, Apache Spark, and Delta Lake, and proficiency in Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure SQL is essential. Expertise in Python, Scala, and SQL for data processing, along with a deep understanding of data modeling, ETL/ELT processes, and distributed computing, is a key requirement, as is experience with CI/CD pipelines and DevOps practices in data engineering. Excellent communication and stakeholder management skills are crucial for this role, and Azure certifications such as Azure Solutions Architect or Azure Data Engineer would be a plus.

Your responsibilities will include implementing ML/AI models in Databricks, utilizing data governance tools like Purview, and working with real-time data processing using Kafka, Event Hubs, or Stream Analytics.

You will enjoy a competitive salary and benefits, a culture focused on talent development, and opportunities to work with cutting-edge technologies. Employee engagement initiatives, annual health check-ups, and insurance coverage are also part of the benefits package.

Persistent Ltd. is committed to fostering diversity and inclusion in the workplace. We welcome applications from all qualified individuals, including those with disabilities, regardless of gender or gender preference. Hybrid work options, flexible working hours, and accessible facilities are available to support employees with diverse needs and preferences. Our inclusive environment aims to enable all employees to thrive while accelerating growth both professionally and personally, impacting the world in positive ways, and enjoying collaborative innovation with diversity and work-life wellbeing at the core.

If you are ready to unleash your full potential at Persistent, please contact pratyaksha_pandit@persistent.com. Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.
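To illustrate one common Databricks and Delta Lake pattern relevant to this role, here is a minimal sketch of an upsert (MERGE) into a Delta table using the delta-spark API. The paths and the join key are assumptions.

```python
# Minimal sketch of a Delta Lake upsert (MERGE) in PySpark with delta-spark.
# Paths and the join key are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # Databricks provides this session

updates = spark.read.parquet("/mnt/staging/customers")        # incoming batch
target = DeltaTable.forPath(spark, "/mnt/curated/customers")  # existing table

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # overwrite existing rows with new values
    .whenNotMatchedInsertAll()   # insert rows seen for the first time
    .execute()
)
```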
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Sr. Data Engineer at Lifesight in Bangalore, you will be responsible for building highly scalable, fault-tolerant distributed data processing systems that handle the massive amounts of data ingested daily. You will work on petabyte-sized data warehouses and Elasticsearch clusters, optimize data pipelines for quality and resilience, and refine diverse datasets into simplified models that encourage self-service. Your role will involve owning data mapping, business logic, and transformations, and ensuring data quality through low-level systems debugging and performance optimization on large production clusters. You will also participate in architecture discussions, influence product roadmaps, take ownership of new projects, and maintain and support existing platforms while transitioning to newer technology stacks.

To excel in this role, you should be proficient in Python and PySpark, have a deep understanding of Apache Spark, including tuning and data frame building, and be able to create Java/Scala Spark jobs for data transformation and aggregation. Your experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto, and with distributed environments built on tools like Kafka, Spark, Hive, and Hadoop, will be invaluable. Familiarity with distributed database systems, file formats like Parquet and Avro, and NoSQL databases is essential, along with experience on cloud platforms like AWS and GCP. Ideally, you should have at least 5 years of professional experience as a data or software engineer.

Joining Lifesight means being part of a fast-growing marketing measurement platform with global impact, where you can influence key decisions on the tech stack, product development, and scalable solutions. You will work in small, agile teams within a non-bureaucratic, fast-paced environment that values innovation, collaboration, and personal well-being. Competitive compensation and benefits and a culture of empowerment that prioritizes work-life balance and team camaraderie await you at Lifesight.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Technology Lead Analyst position is a senior-level role that involves establishing and implementing new or revised application systems and programs in coordination with the Technology team. Your main objective will be to lead applications systems analysis and programming activities.

As the Applications Development Technology Lead Analyst, you will partner with multiple management teams to ensure appropriate integration of functions to meet goals, identify and define system enhancements needed to deploy new products and process improvements, and resolve a variety of high-impact problems and projects through in-depth evaluation of complex business processes, system processes, and industry standards. You will provide expertise in the area, bring advanced knowledge of applications programming, ensure application designs align with the overall architecture blueprint, develop standards for coding, testing, debugging, and implementation, and maintain a comprehensive understanding of how different areas of the business integrate to achieve business goals. Additionally, you will provide in-depth, interpretive analysis to define issues and develop innovative solutions, serve as an advisor or coach to mid-level developers and analysts, and assess risk appropriately when making business decisions.

To qualify for this role, you should have 6-10 years of relevant experience in applications development or systems analysis, extensive experience in system analysis and programming of software applications, experience managing and implementing successful projects, and subject matter expertise in at least one area of applications development. Other qualifications include the ability to adjust priorities quickly, demonstrated leadership and project management skills, and clear, concise written and verbal communication. A Bachelor's degree or equivalent experience is required, with a Master's degree preferred.

As a Vice President (VP), you will lead a technical vertical (frontend, backend, or data), mentor developers, and ensure timely, scalable, and testable delivery across your domain. Your responsibilities will involve leading a domain-specific team of 6-8 engineers, translating architecture into execution with detailed designs and guidance, reviewing complex components built using various programming languages and frameworks, leading data platform migration projects, integrating CI/CD pipelines, enforcing code quality, and evaluating AI-based tools for productivity, testing, and code improvement. Strong mentoring, conflict resolution, and cross-team communication skills are expected.

Required skills include 10-14 years of experience leading development teams and delivering cloud-native solutions, with 2 years in tech leadership; proficiency in programming languages such as Java, Python, and JavaScript/TypeScript; familiarity with frameworks like Spring Boot/WebFlux, Angular, and Node.js; databases such as Oracle, MongoDB, and Redis, with strong SQL skills; cloud technologies including ECS, S3, Lambda, RDS, and Kubernetes; data technologies like Apache Spark with Python, Snowflake, and data migration tools; development practices such as TDD, CI/CD pipelines, and Git workflows; and quality tools like SonarQube and automated testing frameworks.

This job description serves as a high-level overview of the work performed and may include other job-related duties as assigned. If you require a reasonable accommodation due to a disability, please review Accessibility at Citi for assistance.
Posted 2 weeks ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the Dun & Bradstreet team, you will play a crucial role in unlocking the power of data through analytics to create a better tomorrow. Our global community of over 6,000 team members is dedicated to accelerating creativity, innovation, and growth, helping clients turn uncertainty into confidence, risk into opportunity, and potential into prosperity. We welcome bold and diverse thinkers who are passionate about making a positive impact.

You will design and develop data pipelines within our big data ecosystem using technologies such as Apache Spark and Apache Airflow: architecting, building, and deploying scalable, efficient pipelines while ensuring clarity and maintainability through proper documentation. The role demands expertise in data architecture and management, including familiarity with data lakes, modern data warehousing practices, and distributed data processing solutions. Your programming and scripting skills in Python will be put to the test as you write clean, efficient, and maintainable code for cloud-based infrastructures such as AWS and GCP, managing and optimizing cloud data infrastructure for efficient data storage and retrieval. Workflow orchestration with Apache Airflow will also be a key responsibility, as you develop and manage workflows for scheduling and orchestrating data processing jobs.

Innovation and optimization are at the core of what we do, and you will create detailed designs and proofs of concept to enable new workloads and technical capabilities on our platform, collaborating with platform and infrastructure engineers to implement those capabilities in production. Your strong knowledge of big data architecture, coupled with hands-on experience in technologies like Hadoop, Spark, and Hive, will be invaluable in this role.

To be successful in this position, you should have a minimum of 8 years of hands-on experience with big data technologies, including at least 3 years with Spark. Hands-on experience with Dataproc and managing solutions deployed in the cloud is highly desirable, along with a minimum of 6 years of experience in cloud environments, preferably GCP. Any experience with NoSQL and graph databases is beneficial, and experience working in a global company, particularly in a DevOps model, is considered a plus.

If you are ready to join a dynamic team of passionate individuals committed to driving innovation and growth, explore career opportunities at Dun & Bradstreet via https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate-level role in which you will contribute to the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to assist in applications systems analysis and programming activities.

You will utilize your knowledge of applications development procedures and concepts, along with basic knowledge of technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will install and support customer exposure systems and apply fundamental knowledge of programming languages to design specifications.

As an Intermediate Programmer Analyst, you will analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. You will be responsible for identifying problems, analyzing information, and making evaluative judgments to recommend and implement solutions. Operating with a limited level of direct supervision, you will exercise independence of judgment and autonomy while acting as a subject matter expert to senior stakeholders and/or other team members.

In this role, it is crucial to appropriately assess risk when making business decisions, with a focus on safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 4-6 years of proven experience in developing and managing Big Data solutions using Apache Spark and Scala
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Experience working on Kafka and JMS/MQ applications
- Familiarity with data warehousing concepts and ETL processes
- Knowledge of data modeling, data architecture, and data integration techniques
- Experience with Java, web services, XML, JavaScript, microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience developing frameworks and utility services, logging/monitoring, and delivering high-quality software
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Profound knowledge of implementing data storage solutions such as RDBMS, Hive, HBase, Impala, and NoSQL databases

Education:
- Bachelor's degree or equivalent experience

This job description provides a high-level overview of the responsibilities and qualifications for the Applications Development Intermediate Programmer Analyst position. Other job-related duties may be assigned as required.
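As a hedged sketch of the Spark-on-Hive work this role involves, the snippet below reads a Hive table and writes an aggregate back; the database, table, and column names are hypothetical, and it assumes a Spark installation with Hive support configured:

```python
# A minimal PySpark/Hive sketch (table and column names hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-aggregation-example")
    .enableHiveSupport()  # required so spark.table() can resolve Hive tables
    .getOrCreate()
)

orders = spark.table("sales_db.orders")  # hypothetical Hive table

# Aggregate order amounts per day and persist the result as a new table.
daily_totals = (
    orders
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)
daily_totals.write.mode("overwrite").saveAsTable("sales_db.daily_totals")

spark.stop()
```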
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
telangana
On-site
You will be responsible for designing and building backend components of our MLOps platform in Python on AWS. This includes collaborating with geographically distributed cross-functional teams and participating in an on-call rotation with the rest of the team to handle production incidents.

To be successful in this role, you should have at least 3 years of professional backend development experience with Python. You should also have experience with web development frameworks such as Flask or FastAPI, as well as with WSGI and ASGI web servers like Gunicorn and Uvicorn. Experience with concurrent programming designs such as AsyncIO, containers (Docker), AWS ECS or AWS EKS, unit and functional testing frameworks, and public cloud platforms like AWS is also required.

Nice-to-have skills include experience with Apache Kafka and developing Kafka client applications in Python; MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow; big data processing frameworks like Apache Spark; DevOps and IaC tools such as Terraform and Jenkins; various Python packaging options like Wheel, PEX, or Conda; and metaprogramming techniques in Python.

You should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field. In addition to competitive salaries and benefits packages, Nisum India offers its employees continuous learning opportunities, parental medical insurance, various team-building activities, and free meals including snacks, dinner, and subsidized lunch.
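As a hedged illustration of the FastAPI/AsyncIO stack named above, a minimal ASGI service might look like this; the endpoint and module names are hypothetical:

```python
# app.py -- a minimal FastAPI/AsyncIO sketch (endpoint name hypothetical).
# Run with an ASGI server, e.g.: uvicorn app:app --workers 2
import asyncio

from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
async def health() -> dict:
    # A non-blocking await keeps the event loop free to serve other
    # requests concurrently -- the core benefit of AsyncIO-based handlers.
    await asyncio.sleep(0)
    return {"status": "ok"}
```

The same pattern scales to real endpoints by awaiting async database drivers or HTTP clients instead of `asyncio.sleep`.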
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
As a Software Engineer - Backend (Python) with over 7 years of experience, you will be based in Hyderabad and play a crucial role in developing the backend components of the GenAI Platform. Your responsibilities will include designing and constructing backend features for the platform on AWS, collaborating with cross-functional teams spread across different locations, and participating in an on-call rotation for managing production incidents.

To excel in this role, you must possess the following skills:
- A minimum of 7 years of professional experience in backend web development using Python.
- Proficiency in AI, RAG, DevOps, and Infrastructure as Code (IaC) tools like Terraform and Jenkins.
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Expertise in web development frameworks like Flask, Django, or FastAPI.
- Knowledge of concurrent programming concepts like AsyncIO.
- Experience with public cloud platforms such as AWS, Azure, or GCP, preferably AWS.
- Understanding of CI/CD practices, tools, and frameworks.

Additionally, the following skills would be advantageous:
- Experience with Apache Kafka and developing Kafka client applications using Python.
- Familiarity with big data processing frameworks, particularly Apache Spark.
- Proficiency in containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Expertise in unit and functional testing frameworks.
- Knowledge of various Python packaging options such as Wheel, PEX, or Conda.
- Understanding of metaprogramming techniques in Python.

Join our team and contribute to creating a safe, compliant, and efficient access platform for LLMs, leveraging both open-source and commercial resources while adhering to Experian standards and policies. Be a part of a dynamic environment where you can use your expertise to build innovative solutions and drive the growth of the GenAI Platform.
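As a hedged sketch of the "Kafka client applications in Python" skill listed above, a consumer built on the kafka-python package might look like the following; the topic name, broker address, and group id are hypothetical placeholders:

```python
# A minimal Kafka consumer sketch using kafka-python (all names hypothetical).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "inference-events",                    # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="genai-backend",              # hypothetical consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Iterating the consumer blocks and yields records as they arrive.
for message in consumer:
    # message.value holds the deserialized JSON payload.
    print(message.topic, message.offset, message.value)
```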
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
The ideal candidate should have 3-5 years of experience implementing scalable and sustainable data engineering solutions using tools such as Databricks, Snowflake, Teradata, Apache Spark, and Python. Your responsibilities will include creating, maintaining, and optimizing data pipelines as workloads transition from development to production for specific use cases. You will be responsible for end-to-end development, including coding, testing, debugging, and deployment.

Automation is key: you will drive the use of modern tools and techniques to automate repetitive data preparation and integration tasks to enhance productivity. You will map data between source systems, data warehouses, and data marts, as well as train counterparts in data pipelining and preparation techniques.

Collaboration is essential: you will interface with other technology teams to extract, transform, and load data from various sources. You will also play a crucial role in promoting data and analytics capabilities to business unit leaders, educating them on leveraging these capabilities to achieve their business goals.

You should be proficient in converting SQL queries into Python code running on a distributed system and in developing libraries for code reusability. An eagerness to learn new technologies in a fast-paced environment and excellent communication skills are essential for this role. Experience with data pipeline and workflow management tools such as Rundeck and Airflow, AWS cloud services like EC2, EMR, RDS, and Redshift, and stream-processing systems like Spark Streaming would be advantageous.
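As a hedged sketch of "converting SQL queries into Python code running on a distributed system", the snippet below shows the same aggregation expressed both as Spark SQL and as the equivalent PySpark DataFrame code; the data path and column names are hypothetical:

```python
# SQL-to-PySpark conversion sketch (path and column names hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-to-dataframe").getOrCreate()
df = spark.read.parquet("s3://example-bucket/transactions")  # placeholder path

# Original SQL form, run directly against a temp view:
df.createOrReplaceTempView("transactions")
via_sql = spark.sql(
    "SELECT customer_id, SUM(amount) AS total "
    "FROM transactions WHERE status = 'settled' "
    "GROUP BY customer_id"
)

# Equivalent DataFrame API form, easier to package into reusable libraries:
via_api = (
    df.filter(F.col("status") == "settled")
      .groupBy("customer_id")
      .agg(F.sum("amount").alias("total"))
)
```

Both forms compile to the same distributed execution plan; the DataFrame version is usually preferred when the logic needs to live in tested, reusable Python functions.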
Posted 3 weeks ago
5.0 - 12.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a Data Software Engineer with 5-12 years of experience in Big Data and data-related technologies, you will contribute to the success of projects in Chennai and Coimbatore in a hybrid work mode. You should possess an expert-level understanding of distributed computing principles and strong knowledge of Apache Spark, with hands-on programming skills in Python.

Your role will involve working with technologies such as Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, and Spark Streaming to build stream-processing systems. You should have a good grasp of Big Data querying tools like Hive and Impala, as well as experience integrating data from various sources, including RDBMS, ERP systems, and files. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, and knowledge of ETL techniques and frameworks, will be essential for this role.

You will be tasked with performance tuning of Spark jobs, working with Azure Databricks, and leading a team efficiently. Additionally, your expertise in designing and implementing Big Data solutions, along with a strong understanding of SQL queries, joins, stored procedures, and relational schemas, will be crucial. As a practitioner of Agile methodology, you will play a key role in the successful delivery of data-driven projects.
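As a hedged sketch of the stream-processing work described above, a minimal Spark Structured Streaming job reading from Kafka might look like this; the broker address and topic are placeholders, and it assumes the spark-sql-kafka connector package is available on the classpath:

```python
# A minimal Spark Structured Streaming sketch (broker/topic hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-example").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "clickstream")                   # hypothetical topic
    .load()
)

# Kafka delivers values as bytes; cast to string before downstream parsing.
decoded = events.selectExpr("CAST(value AS STRING) AS raw_event")

# Write to the console sink for demonstration; a real job would target
# a durable sink (files, a table, another Kafka topic) with checkpointing.
query = (
    decoded.writeStream
    .format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```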
Posted 3 weeks ago