4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in data, analytics and technology solutions at PwC assist clients in developing solutions that help build trust, drive improvement, and detect, monitor, and predict risk. Your work will involve using advanced analytics, data wrangling technology, and automation tools to leverage data, with a focus on establishing the right processes and structures that enable our clients to make efficient and effective decisions based on accurate information that is complete and trustworthy. Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise, and developing awareness of your strengths. You are expected to anticipate the needs of your teams and clients and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include, but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self-awareness, enhance strengths, and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g., specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Requirements
Preferred Knowledge/Skills: Hands-on working knowledge of visualization software such as Tableau, Qlik, and Power BI. Applying knowledge of data analysis and manipulation products such as SQL, Alteryx, Python, and Databricks. Engaging in regulation, workforce, financial reporting, and automation work. Managing complex internal and external stakeholder relationships. Thriving in a dynamic consulting environment, with a desire to grow within this setting. Managing client engagements and internal projects, including budgets, risks, and quality assurance. Preparing reports and deliverables for clients and other stakeholders. Developing and maintaining internal and external relationships. Identifying and pursuing business opportunities, supporting line management in proposal development, and managing, coaching, and supporting team members. Supporting Engagement Managers with engagement scoping and planning activities. Coaching team members in task completion. Performing advanced data analysis to support test procedures. Collaborating effectively with local and regional teams and clients. Supporting Engagement Managers in drafting client deliverables for review by Engagement Leaders. Managing project economics for engagement teams.
Conducting basic review activities and providing coaching to junior team members.

Good to Have
Advanced knowledge and understanding of financial risk management, operational risk management, and compliance requirements. Proficiency in data analytics tools (e.g., Alteryx, Power BI) and Microsoft suite tools (e.g., Word, Excel, PowerPoint). Experience with major ERPs such as SAP, Oracle, and/or technology security management. Programming skills in SQL, Python, or R. Accounting and consulting experience. Knowledge of financial services is preferable. Strong analytical skills with high attention to detail and accuracy. Excellent verbal, written, and interpersonal communication skills. Ability to work both independently and within a team environment.

Education/Qualification: Bachelor’s or Master’s degree in Engineering and Business, Financial Mathematics, Mathematical Economics, Quantitative Finance, Statistics, or a related field.
Level of experience: More than 4 years of experience in relevant roles, preferably in a public accounting firm or a large corporation.

Preferred
More than 3 years of assurance experience in internal controls and/or business process testing. Experience in technology risk (e.g., IT General Controls, information security). Previous experience in shared service delivery centers. Certifications such as CIA, CISA, or ITIL are preferred. CPA or equivalent certification.
Posted 6 days ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in data, analytics and technology solutions at PwC assist clients in developing solutions that help build trust, drive improvement, and detect, monitor, and predict risk. Your work will involve using advanced analytics, data wrangling technology, and automation tools to leverage data, with a focus on establishing the right processes and structures that enable our clients to make efficient and effective decisions based on accurate information that is complete and trustworthy. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include, but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g., specific PwC tax and audit guidance), and uphold the Firm's code of conduct and independence requirements.

Requirements
Preferred Knowledge/Skills: Assist in collecting, cleaning, and processing data from various sources to support business objectives. Conduct exploratory data analysis to identify trends, patterns, and insights that drive strategic decision-making. Collaborate with team members to design and implement data models and visualizations using tools such as Excel, SQL, Python, or Power BI. Support the preparation of reports and presentations that communicate findings and insights to stakeholders in a clear and concise manner. Participate in the development and maintenance of documentation and data dictionaries to ensure data integrity and governance. Work with cross-functional teams to understand business requirements and deliver data-driven solutions. Stay updated with industry trends and best practices in data analytics and contribute ideas for continuous improvement.

Good To Have
Experience in a similar role in their current profile. Good accounting knowledge and experience in dealing with financial data are a plus. Knowledge of Azure Databricks, Alteryx, Python, SAS, or KNIME. Familiarity with data analysis tools and programming languages (e.g., Excel, SQL, Python, Databricks). Basic understanding of Power BI data visualization techniques and tools. Strong analytical and problem-solving skills with attention to detail.
Education
Bachelor’s degree in a related field such as Data Science, Statistics, Mathematics, Computer Science, Economics, or equivalent experience. More than 1 year of experience in data analytics, data science, or a related role. Excellent verbal and written communication skills. Ability to work collaboratively in a team environment and manage multiple tasks efficiently. Eagerness to learn and adapt to new technologies and methodologies. CPA or equivalent certification.
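The analyst role above centers on exploratory data analysis with Python among other tools. As a minimal hedged illustration of that kind of task, here is a pandas sketch; the dataset and column names (region, amount) are invented placeholders, not part of the posting.

```python
# Minimal exploratory-data-analysis sketch in pandas.
# The sample data and column names are hypothetical.
import pandas as pd

# Hypothetical sample standing in for an extracted dataset
df = pd.DataFrame({
    "region": ["north", "south", "north", None],
    "amount": [120.0, 80.0, None, 40.0],
})

print(df.describe(include="all"))      # summary statistics

df = df.dropna(subset=["region"])      # drop rows missing a region
df["amount"] = df["amount"].fillna(0)  # default missing amounts to 0

# Simple trend check: total amount per region
print(df.groupby("region", as_index=False)["amount"].sum())
```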
Posted 6 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Veersa
Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders, with an impressive 85% YoY growth; a profitable company since inception. Team strength: almost 400 professionals and growing rapidly.

Our Services Include:
Digital & Software Solutions: Product Development, Legacy Modernization, Support
Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization
Tools & Accelerators: AI/ML-embedded tools that integrate with client systems
Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.

Tech Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js
Databases: PostgreSQL, MySQL, MS SQL, Oracle
Cloud: AWS & Azure (Serverless Architecture)
Website: https://veersatech.com
LinkedIn: Feel free to explore our company profile

Job Description
Job Title: ETL Ingestion Engineer (Azure Data Factory)
Department: Data Engineering / Analytics
Employment Type: Full-time
Experience Level: 2–5 years

About The Role
We are looking for a talented ETL Ingestion Engineer with hands-on experience in Azure Data Factory (ADF) to join our Data Engineering team. The individual will be responsible for building, orchestrating, and maintaining robust data ingestion pipelines from various source systems into our data lake or data warehouse environments.

Key Responsibilities
Design and implement scalable data ingestion pipelines using Azure Data Factory (ADF). Extract data from a variety of sources such as SQL Server, flat files, APIs, and cloud storage. Develop ADF pipelines and data flows to support both batch and incremental loads. Ensure data quality, consistency, and reliability throughout the ETL process. Optimize ADF pipelines for performance, cost, and scalability. Monitor pipeline execution, troubleshoot failures, and ensure data availability meets SLAs. Document pipeline logic, source-target mappings, and operational procedures.

Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field. 2+ years of experience in ETL development and data pipeline implementation. Strong hands-on experience with Azure Data Factory (ADF), including linked services, datasets, pipelines, and triggers. Proficiency in SQL and working with structured and semi-structured data (CSV, JSON, Parquet). Experience with Azure storage systems (ADLS Gen2, Blob Storage) and data movement. Familiarity with job monitoring and logging mechanisms in Azure.

Preferred Skills
Experience with Azure Data Lake, Synapse Analytics, or Databricks. Exposure to Azure DevOps for CI/CD in data pipelines. Understanding of data governance, lineage, and compliance requirements (GDPR, HIPAA, etc.). Knowledge of RESTful APIs and API-based ingestion.

Job Title: ETL Lead – Azure Data Factory (ADF)
Department: Data Engineering / Analytics
Employment Type: Full-time
Experience Level: 5+ years

About The Role
We are seeking an experienced ETL Lead with strong expertise in Azure Data Factory (ADF) to lead and oversee data ingestion and transformation projects across the organization. The role demands a mix of technical proficiency and leadership to design scalable data pipelines, manage a team of engineers, and collaborate with cross-functional stakeholders to ensure reliable data delivery.

Key Responsibilities
Lead the design, development, and implementation of end-to-end ETL pipelines using Azure Data Factory (ADF).
Architect scalable ingestion solutions from multiple structured and unstructured sources (e.g., SQL Server, APIs, flat files, cloud storage). Define best practices for ADF pipeline orchestration, performance tuning, and cost optimization. Mentor, guide, and manage a team of ETL engineers, ensuring high-quality deliverables and adherence to project timelines. Work closely with business analysts, data modelers, and source system owners to understand and translate data requirements. Establish data quality checks, monitoring frameworks, and alerting mechanisms. Drive code reviews, CI/CD integration (using Azure DevOps), and documentation standards. Own delivery accountability across multiple ingestion and data integration workstreams.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related discipline. 5+ years of hands-on ETL development experience, with 3+ years in Azure Data Factory. Deep understanding of data ingestion, transformation, and warehousing best practices. Strong SQL skills and experience with cloud-native data storage (ADLS Gen2, Blob Storage). Proficiency in orchestrating complex data flows, parameterized pipelines, and incremental data loads. Experience in handling large-scale data migration or modernization projects.

Preferred Skills
Familiarity with modern data platforms such as Azure Synapse, Snowflake, and Databricks. Exposure to Azure DevOps pipelines for CI/CD of ADF pipelines and linked services. Understanding of data governance, security (RBAC), and compliance requirements. Experience leading Agile teams and sprint-based delivery models. Excellent communication, leadership, and stakeholder management skills.
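The two ADF roles above repeatedly ask for incremental loads. ADF pipelines themselves are authored as JSON activities in the portal, but the underlying high-watermark pattern can be sketched in plain Python. This is a pattern illustration under stated assumptions, not ADF-specific code; every table, column, and sink name below is a hypothetical placeholder.

```python
# High-watermark incremental load pattern, sketched in plain Python.
# All table, column, and sink names are hypothetical placeholders.
import sqlite3  # stand-in for any SQL source; swap for pyodbc etc.

def write_to_lake(rows):
    # Placeholder sink; a real job would e.g. append Parquet to ADLS
    print(f"writing {len(rows)} rows to the data lake")

def load_incremental(conn, last_watermark):
    """Fetch only rows changed since the last successful run."""
    rows = conn.execute(
        "SELECT id, payload, modified_at FROM source_table "
        "WHERE modified_at > ? ORDER BY modified_at",
        (last_watermark,),
    ).fetchall()
    if not rows:
        return last_watermark   # nothing new; keep the old watermark
    write_to_lake(rows)
    return rows[-1][2]          # newest modified_at is the new watermark

# Tiny in-memory demo source
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_table (id, payload, modified_at)")
conn.execute("INSERT INTO source_table VALUES (1, 'a', '2024-01-01')")
print("new watermark:", load_incremental(conn, "2023-12-31"))
```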
Posted 6 days ago
2.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Veersa
Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders, with an impressive 85% YoY growth; a profitable company since inception. Team strength: almost 400 professionals and growing rapidly.

Our Services Include:
Digital & Software Solutions: Product Development, Legacy Modernization, Support
Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization
Tools & Accelerators: AI/ML-embedded tools that integrate with client systems
Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.

Tech Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js
Databases: PostgreSQL, MySQL, MS SQL, Oracle
Cloud: AWS & Azure (Serverless Architecture)
Website: https://veersatech.com
LinkedIn: Feel free to explore our company profile

About The Role
We are seeking highly skilled and experienced Data Engineers and Lead Data Engineers to join our growing data team. This role is ideal for professionals with 2 to 10 years of experience in data engineering and a strong foundation in SQL, Databricks, Spark SQL, PySpark, and BI tools like Power BI or Tableau. As a Data Engineer, you will be responsible for building scalable data pipelines, optimizing data processing workflows, and enabling insightful reporting and analytics across the organization.

Key Responsibilities
Design and develop robust, scalable data pipelines using PySpark and Databricks. Write efficient SQL and Spark SQL queries for data transformation and analysis. Work closely with BI teams to enable reporting through Power BI or Tableau. Optimize performance of big data workflows and ensure data quality. Collaborate with business and technical stakeholders to gather and translate data requirements. Implement best practices for data integration, processing, and governance.

Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field. 2–10 years of experience in data engineering or a similar role. Strong experience with SQL, Spark SQL, and PySpark. Hands-on experience with Databricks for big data processing. Proven experience with BI tools such as Power BI and/or Tableau. Strong understanding of data warehousing and ETL/ELT concepts. Good problem-solving skills and the ability to work in cross-functional teams.

Nice To Have
Experience with cloud data platforms (Azure, AWS, or GCP). Familiarity with CI/CD pipelines and version control tools (e.g., Git). Understanding of data governance, security, and compliance standards. Exposure to data lake architectures and real-time streaming data pipelines.
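Since the role leads with SQL, Spark SQL, and PySpark on Databricks, a minimal hedged sketch of that combination follows. The data and names are invented for illustration, and a local Spark session stands in for a Databricks cluster.

```python
# Minimal PySpark + Spark SQL sketch; data and names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "north", 120.0), (2, "south", 80.0), (3, "north", 40.0)],
    ["order_id", "region", "amount"],
)

# Expose the DataFrame to Spark SQL and aggregate with plain SQL
orders.createOrReplaceTempView("orders")
totals = spark.sql(
    "SELECT region, SUM(amount) AS total_amount FROM orders GROUP BY region"
)
totals.show()
```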
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location - Bangalore (Whitefield)
Work Mode - WFO during probation (3 months), later hybrid

Role overview
Experience
1. Ten-plus years' experience in software development, with at least the last few years in a leadership role with progressively increasing responsibilities
2. Extensive experience in the following areas:
a. C#, .NET
b. Designing and building cloud-native solutions (Azure, AWS, Google Cloud Platform)
c. Infrastructure as Code tools (Terraform, Pulumi, cloud-specific IaC tools)
d. Configuration management tools (Ansible, Chef, SaltStack)
e. Containerization and orchestration technologies (Docker, Kubernetes)
f. Native and third-party Databricks integrations (Delta Live Tables, Auto Loader, Databricks Workflows / Apache Airflow, Unity Catalog)
3. Extensive experience in Azure
4. Experience designing and implementing data security and governance platforms adhering to compliance standards (HIPAA, SOC 2) preferred
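This posting lists Pulumi among its Infrastructure as Code tools; Pulumi lets teams declare cloud resources in a general-purpose language such as Python. As a hedged sketch only: the snippet below follows the shape of Pulumi's azure-native quickstart, with resource names invented for illustration. It is not a drop-in program for this employer's environment.

```python
# Pulumi sketch (Python) declaring a resource group and storage account.
# Shapes follow Pulumi's azure-native quickstart; names are illustrative.
import pulumi
from pulumi_azure_native import resources, storage

rg = resources.ResourceGroup("demo-rg")

account = storage.StorageAccount(
    "demosa",
    resource_group_name=rg.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

# Export the account name so other stacks or scripts can reference it
pulumi.export("storage_account_name", account.name)
```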
Posted 6 days ago
2.0 - 10.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
About Veersa
Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders, with an impressive 85% YoY growth; a profitable company since inception. Team strength: almost 400 professionals and growing rapidly.

Our Services Include:
Digital & Software Solutions: Product Development, Legacy Modernization, Support
Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization
Tools & Accelerators: AI/ML-embedded tools that integrate with client systems
Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.

Tech Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js
Databases: PostgreSQL, MySQL, MS SQL, Oracle
Cloud: AWS & Azure (Serverless Architecture)
Website: https://veersatech.com
LinkedIn: Feel free to explore our company profile

About The Role
We are seeking highly skilled and experienced Data Engineers and Lead Data Engineers to join our growing data team. This role is ideal for professionals with 2 to 10 years of experience in data engineering and a strong foundation in SQL, Databricks, Spark SQL, PySpark, and BI tools like Power BI or Tableau. As a Data Engineer, you will be responsible for building scalable data pipelines, optimizing data processing workflows, and enabling insightful reporting and analytics across the organization.

Key Responsibilities
Design and develop robust, scalable data pipelines using PySpark and Databricks. Write efficient SQL and Spark SQL queries for data transformation and analysis. Work closely with BI teams to enable reporting through Power BI or Tableau. Optimize performance of big data workflows and ensure data quality. Collaborate with business and technical stakeholders to gather and translate data requirements. Implement best practices for data integration, processing, and governance.

Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field. 2–10 years of experience in data engineering or a similar role. Strong experience with SQL, Spark SQL, and PySpark. Hands-on experience with Databricks for big data processing. Proven experience with BI tools such as Power BI and/or Tableau. Strong understanding of data warehousing and ETL/ELT concepts. Good problem-solving skills and the ability to work in cross-functional teams.

Nice To Have
Experience with cloud data platforms (Azure, AWS, or GCP). Familiarity with CI/CD pipelines and version control tools (e.g., Git). Understanding of data governance, security, and compliance standards. Exposure to data lake architectures and real-time streaming data pipelines.
Posted 6 days ago
5.0 years
0 Lacs
Delhi, India
On-site
About Veersa
Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders, with an impressive 85% YoY growth; a profitable company since inception. Team strength: almost 400 professionals and growing rapidly.

Our Services Include:
Digital & Software Solutions: Product Development, Legacy Modernization, Support
Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization
Tools & Accelerators: AI/ML-embedded tools that integrate with client systems
Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.

Tech Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js
Databases: PostgreSQL, MySQL, MS SQL, Oracle
Cloud: AWS & Azure (Serverless Architecture)
Website: https://veersatech.com
LinkedIn: Feel free to explore our company profile

Job Description
Job Title: ETL Ingestion Engineer (Azure Data Factory)
Department: Data Engineering / Analytics
Employment Type: Full-time
Experience Level: 2–5 years

About The Role
We are looking for a talented ETL Ingestion Engineer with hands-on experience in Azure Data Factory (ADF) to join our Data Engineering team. The individual will be responsible for building, orchestrating, and maintaining robust data ingestion pipelines from various source systems into our data lake or data warehouse environments.

Key Responsibilities
Design and implement scalable data ingestion pipelines using Azure Data Factory (ADF). Extract data from a variety of sources such as SQL Server, flat files, APIs, and cloud storage. Develop ADF pipelines and data flows to support both batch and incremental loads. Ensure data quality, consistency, and reliability throughout the ETL process. Optimize ADF pipelines for performance, cost, and scalability. Monitor pipeline execution, troubleshoot failures, and ensure data availability meets SLAs. Document pipeline logic, source-target mappings, and operational procedures.

Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field. 2+ years of experience in ETL development and data pipeline implementation. Strong hands-on experience with Azure Data Factory (ADF), including linked services, datasets, pipelines, and triggers. Proficiency in SQL and working with structured and semi-structured data (CSV, JSON, Parquet). Experience with Azure storage systems (ADLS Gen2, Blob Storage) and data movement. Familiarity with job monitoring and logging mechanisms in Azure.

Preferred Skills
Experience with Azure Data Lake, Synapse Analytics, or Databricks. Exposure to Azure DevOps for CI/CD in data pipelines. Understanding of data governance, lineage, and compliance requirements (GDPR, HIPAA, etc.). Knowledge of RESTful APIs and API-based ingestion.

Job Title: ETL Lead – Azure Data Factory (ADF)
Department: Data Engineering / Analytics
Employment Type: Full-time
Experience Level: 5+ years

About The Role
We are seeking an experienced ETL Lead with strong expertise in Azure Data Factory (ADF) to lead and oversee data ingestion and transformation projects across the organization. The role demands a mix of technical proficiency and leadership to design scalable data pipelines, manage a team of engineers, and collaborate with cross-functional stakeholders to ensure reliable data delivery.

Key Responsibilities
Lead the design, development, and implementation of end-to-end ETL pipelines using Azure Data Factory (ADF).
Architect scalable ingestion solutions from multiple structured and unstructured sources (e.g., SQL Server, APIs, flat files, cloud storage). Define best practices for ADF pipeline orchestration, performance tuning, and cost optimization. Mentor, guide, and manage a team of ETL engineers, ensuring high-quality deliverables and adherence to project timelines. Work closely with business analysts, data modelers, and source system owners to understand and translate data requirements. Establish data quality checks, monitoring frameworks, and alerting mechanisms. Drive code reviews, CI/CD integration (using Azure DevOps), and documentation standards. Own delivery accountability across multiple ingestion and data integration workstreams.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related discipline. 5+ years of hands-on ETL development experience, with 3+ years in Azure Data Factory. Deep understanding of data ingestion, transformation, and warehousing best practices. Strong SQL skills and experience with cloud-native data storage (ADLS Gen2, Blob Storage). Proficiency in orchestrating complex data flows, parameterized pipelines, and incremental data loads. Experience in handling large-scale data migration or modernization projects.

Preferred Skills
Familiarity with modern data platforms such as Azure Synapse, Snowflake, and Databricks. Exposure to Azure DevOps pipelines for CI/CD of ADF pipelines and linked services. Understanding of data governance, security (RBAC), and compliance requirements. Experience leading Agile teams and sprint-based delivery models. Excellent communication, leadership, and stakeholder management skills.
Posted 6 days ago
2.0 - 10.0 years
0 Lacs
Delhi, India
On-site
About Veersa
Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders, with an impressive 85% YoY growth; a profitable company since inception. Team strength: almost 400 professionals and growing rapidly.

Our Services Include:
Digital & Software Solutions: Product Development, Legacy Modernization, Support
Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization
Tools & Accelerators: AI/ML-embedded tools that integrate with client systems
Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.

Tech Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js
Databases: PostgreSQL, MySQL, MS SQL, Oracle
Cloud: AWS & Azure (Serverless Architecture)
Website: https://veersatech.com
LinkedIn: Feel free to explore our company profile

About The Role
We are seeking highly skilled and experienced Data Engineers and Lead Data Engineers to join our growing data team. This role is ideal for professionals with 2 to 10 years of experience in data engineering and a strong foundation in SQL, Databricks, Spark SQL, PySpark, and BI tools like Power BI or Tableau. As a Data Engineer, you will be responsible for building scalable data pipelines, optimizing data processing workflows, and enabling insightful reporting and analytics across the organization.

Key Responsibilities
Design and develop robust, scalable data pipelines using PySpark and Databricks. Write efficient SQL and Spark SQL queries for data transformation and analysis. Work closely with BI teams to enable reporting through Power BI or Tableau. Optimize performance of big data workflows and ensure data quality. Collaborate with business and technical stakeholders to gather and translate data requirements. Implement best practices for data integration, processing, and governance.

Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field. 2–10 years of experience in data engineering or a similar role. Strong experience with SQL, Spark SQL, and PySpark. Hands-on experience with Databricks for big data processing. Proven experience with BI tools such as Power BI and/or Tableau. Strong understanding of data warehousing and ETL/ELT concepts. Good problem-solving skills and the ability to work in cross-functional teams.

Nice To Have
Experience with cloud data platforms (Azure, AWS, or GCP). Familiarity with CI/CD pipelines and version control tools (e.g., Git). Understanding of data governance, security, and compliance standards. Exposure to data lake architectures and real-time streaming data pipelines.
Posted 6 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TJX Companies
At TJX Companies, every day brings new opportunities for growth, exploration, and achievement. You’ll be part of our vibrant team that embraces diversity, fosters collaboration, and prioritizes your development. Whether you’re working in our four global Home Offices, Distribution Centers, or Retail Stores (TJ Maxx, Marshalls, HomeGoods, Homesense, Sierra, Winners, and TK Maxx), you’ll find abundant opportunities to learn, thrive, and make an impact. Come join our TJX family, a Fortune 100 company and the world’s leading off-price retailer.

Job Description
About TJX: TJX is a Fortune 100 company that operates off-price retailers of apparel and home fashions. TJX India - Hyderabad is the IT home office in the global technology organization of off-price apparel and home fashion retailer TJX, established to deliver innovative solutions that help transform operations globally. At TJX, we strive to build a workplace where our Associates’ contributions are welcomed and are embedded in our purpose to provide excellent value to our customers every day. At TJX India, we take a long-term view of your career. We have a high-performance culture that rewards Associates with career growth opportunities, preferred assignments, and upward career advancement. We take well-being very seriously and are committed to offering a great work-life balance for all our Associates.

What You’ll Discover
Inclusive culture and career growth opportunities. A truly global IT organization that collaborates across North America, Europe, Asia, and Australia. A challenging, collaborative, and team-based environment.

What You’ll Do
The Global Supply Chain - Logistics Team is responsible for managing various supply chain logistics related solutions within TJX IT. The organization delivers capabilities that enrich the customer experience and provide business value. We seek a motivated, talented Staff Engineer with a good understanding of cloud, database, and BI concepts to help architect enterprise reporting solutions across global buying, planning, and allocations.

What You’ll Need
The Global Supply Chain - Logistics Team thrives on strong relationships with our business partners and works diligently to address their needs, which supports TJX growth and operational stability. On this tightly knit and fast-paced solution delivery team you will be constantly challenged to stretch and think outside the box. You will be working with product teams, architecture, and business partners to strategically plan and deliver the product features by connecting the technical and business worlds. You will need to break down complex problems into steps that drive product development while keeping product quality and security as the priority. You will be responsible for most architecture, design, and technical decisions within the assigned scope.

Key Responsibilities
Design, develop, test, and deploy AI solutions using Azure AI services to meet business requirements, working collaboratively with architects and other engineers. Train, fine-tune, and evaluate AI models, including large language models (LLMs), ensuring they meet performance criteria and integrate seamlessly into new or existing solutions. Develop and integrate APIs to enable smooth interaction between AI models and other applications, facilitating efficient model serving. Collaborate effectively with cross-functional teams, including data scientists, software engineers, and business stakeholders, to deliver comprehensive AI solutions.
Optimize AI and ML model performance through techniques such as hyperparameter tuning and model compression to enhance efficiency and effectiveness. Monitor and maintain AI systems, providing technical support and troubleshooting to ensure continuous operation and reliability. Create comprehensive documentation for AI solutions, including design documents, user guides, and operational procedures, to support development and maintenance. Stay updated with the latest advancements in AI, machine learning, and cloud technologies, demonstrating a commitment to continuous learning and improvement. Design, code, deploy, and support software components, working collaboratively with AI architects and engineers to build impactful systems and services. Lead medium-complexity initiatives, prioritizing and assigning tasks, providing guidance, and resolving issues to ensure successful project delivery.

Minimum Qualifications
Bachelor's degree in computer science, engineering, or a related field. 8+ years of experience in data/software engineering, design, implementation, and architecture. At least 5+ years of hands-on experience in developing AI/ML solutions, with a focus on deploying them in a cloud environment. Deep understanding of AI and ML algorithms with a focus on Operations Research / Optimization knowledge (preferably Metaheuristics / Genetic Algorithms). Strong programming skills in Python with advanced OOP concepts. Good understanding of structured, semi-structured, and unstructured data; data modelling; data analysis; ETL and ELT. Proficiency with Databricks and PySpark. Experience with MLOps practices, including CI/CD for machine learning models. Knowledge of security best practices for deploying AI solutions, including data encryption and access control. Knowledge of ethical considerations in AI, including bias detection and mitigation strategies. This role operates in an Agile/Scrum environment and requires a solid understanding of the full software lifecycle, including functional requirement gathering, design and development, testing of software applications, and documenting requirements and technical specifications. Fully owns epics with decreasing guidance. Takes initiative through identifying gaps and opportunities. Strong communication and influence skills. Solid team leadership with mentorship skills. Ability to understand the work environment and competing priorities in conjunction with developing/meeting project goals. Shows a positive, open-minded, and can-do attitude.

Experience In The Following Technologies
Advanced Python programming (OOP). Operations Research / Optimization knowledge (preferably Metaheuristics / Genetic Algorithms). Databricks with PySpark. Azure / cloud knowledge. GitHub / version control. Functional knowledge of Supply Chain / Logistics is preferred.

In addition to our open door policy and supportive work environment, we also strive to provide a competitive salary and benefits package. TJX considers all applicants for employment without regard to race, color, religion, gender, sexual orientation, national origin, age, disability, gender identity and expression, marital or military status, or based on any individual's status in any group or class protected by applicable federal, state, or local law. TJX also provides reasonable accommodations to qualified individuals with disabilities in accordance with the Americans with Disabilities Act and applicable state and local law.
Address: Salarpuria Sattva Knowledge City, Inorbit Road
Location: APAC Home Office Hyderabad IN
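The TJX role above asks for optimization knowledge, "preferably Metaheuristics / Genetic Algorithms." To make the technique concrete, here is a toy genetic algorithm maximizing a simple function; the fitness function and parameters are invented for illustration and are in no way TJX code.

```python
# Toy genetic algorithm maximizing f(x) = -(x - 3)^2 over floats.
# Fitness function and parameters are illustrative only.
import random

def fitness(x):
    return -(x - 3.0) ** 2  # single peak at x = 3

def evolve(pop_size=30, generations=50, mutation_scale=0.5):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Crossover (average two parents) plus Gaussian mutation
        children = [
            (random.choice(survivors) + random.choice(survivors)) / 2
            + random.gauss(0, mutation_scale)
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)

print("best x found:", round(evolve(), 3))  # should approach 3.0
```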
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About WorkSpan
The next era of growth is being driven by business interoperability. Cloud, genAI, solutions combining services and software: more and more, companies outpace their competition not just through building superior products, but by creating stronger partnerships, paths to market, and better business models for winning together. Cloud providers, service providers, tech partners, and resellers are teaming up to win more deals together through co-selling. WorkSpan is building the world’s largest, trusted co-selling network. WorkSpan already has seven of the world’s ten largest partner ecosystems on our platform and $50B of customer pipeline under active management. AWS, Google, Microsoft, MongoDB, PagerDuty, Databricks, and dozens of others trust WorkSpan to accelerate and amplify their ecosystem strategies. With a $30M Series C and backing from world-class investors Insight Partners, Mayfield, and M12, WorkSpan is poised to drive the future of B2B. Come be a part of it.

Join Our Team For The Opportunity To
Own your results and make a tangible impact on the business. Develop a deep understanding of GTM, working closely with leadership across sales and marketing. Work with driven, passionate people every day. Be a part of an ambitious, supportive team on a mission. Be the end-to-end owner of your product. Work closely with Marketing, Sales, Pre-Sales, Professional Services, Customer Success, Support, Product, and Engineering teams to support the expansion and deliver solutions for our customers. Engage with and evaluate the needs of our customers and partners to discover important problems and opportunities that can be translated into the right, compelling products and features. Be the voice of the customer and help provide context, empathy, and rationale behind customer needs. Less “customer wants a feature”, more “they want to solve a problem that helps with…”. Create, maintain, and socialize a high-level business strategy roadmap through meaningful collaboration and ruthless data-driven prioritization. Deliver product demos and presentations for your product to existing and prospective customers. Ensure competitive differentiation. Collaborate with UX and Engineering to deliver product delight to the market. Be passionate about building a winning product, be driven to make customers successful, and be resourceful enough to thrive in a startup environment. Have a bias toward action and iterate to deliver the optimal product.

Your Primary Responsibilities
Work closely with the management team to identify the target segment, the problems to solve, and the differentiated value propositions. Collaborate effectively within the team and with design, engineering, sales, marketing, growth, and customer success. Lead product initiatives to ensure timely and high-quality delivery of the product to the market. Engage with customers and with customer-facing teams to effectively market and sell the product. Engage in user research to solidify concepts and impact growth.
Have a data-oriented approach to making the tradeoffs that are core to the product management function.

Skills And Qualifications We Are Looking For
3+ years of product management experience working on a SaaS offering, preferably with enterprise CRM or ERP products. 1+ years of development experience is an added advantage. Proven experience in building B2B SaaS applications and API development. Excellent presentation skills, including strong oral and written capabilities; ability to clearly communicate compelling messages to internal and external audiences. Experience operating in a hyper-growth / entrepreneurial environment and scaling new and emerging products from the ground up.

Company Perks & Benefits
💰 Competitive salary, equity, and performance bonus
🏖 Unlimited vacation
🤕 Paid sick leave
💻 Latest MacBook
🏥 Medical insurance
🏋️ Monthly Wellness Stipend
🍼 Paid maternity and paternity leave

Why join us?
💡 We created the fast-growing Ecosystem Business Management category
🚀 We're growing rapidly, and the sky’s the limit - we just raised a Series C to help us expand
🦄 We've built an extremely efficient go-to-market engine
🥇 Work with a talented team you'll learn a lot from
🙏 We care about delivering value to our awesome customers
🗣️ We are flexible in our opinions and always open to new ideas
💡 We innovate continuously, with a focus on long-term success
🌍 We know it takes people with different ideas, strengths, backgrounds, cultures, weaknesses, opinions, and interests to make our company succeed. We celebrate our differences and are lucky to have teammates worldwide.
🤝 Buddy system: It's dangerous to go alone, so we got you a buddy 🙌. In some realms, they use the term mentor, but we don't think that is a good description. Your buddy will be mentoring you, but he/she will also be your friend and your first point of contact during the onboarding period.

Other Cool Things About WorkSpan
❓ What is WorkSpan? https://www.workspan.com/what-is-workspan/
💙 Our values: https://www.workspan.com/careers/
🔊 Videos of events and customer speakers: https://www.youtube.com/c/WorkSpan/videos
🆕 Latest updates from WorkSpan: https://www.linkedin.com/company/workspan/posts/

WorkSpan ensures equal employment opportunity without discrimination or harassment based on race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity or expression, age, disability, national origin, marital or domestic/civil partnership status, genetic information, citizenship status, veteran status, or any other characteristic protected by law.
Posted 6 days ago
25.0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure, and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Deliver data pipeline projects from development to deployment, managing timelines and risks. Ensure data quality and integrity through meticulous testing and monitoring. Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions. Work closely with the product team and key collaborators to understand data requirements. Adhere to data engineering industry standards. Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies. Familiarity with code versioning using Git and code migration tools. Familiarity with JIRA. Stay up to date with the latest data technologies and trends.

Basic Qualifications
What we expect of you: Doctorate degree OR Master’s degree and 4 to 6 years of Information Systems experience OR Bachelor’s degree and 6 to 8 years of Information Systems experience OR Diploma and 10 to 12 years of Information Systems experience. Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP). Proficiency in Python, PySpark, and SQL. Development knowledge in Databricks. Good analytical and problem-solving skills to address sophisticated data challenges.
Preferred Qualifications
Experience with data modeling. Experience working with ETL orchestration technologies. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Familiarity with SQL/NoSQL databases.

Soft Skills
Skilled in breaking down problems, documenting problem statements, and estimating efforts. Effective communication and interpersonal skills to collaborate with multi-functional teams. Excellent analytical and problem-solving skills. Strong verbal and written communication skills. Ability to work successfully with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals.

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression, and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
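The preferred qualifications above call out automated unit testing alongside CI/CD. As a minimal hedged sketch of that practice: a pure-Python cleaning function with a pytest case, where the function and field names are hypothetical.

```python
# Unit-testing sketch (pytest); function and field names are hypothetical.
def normalize_record(record: dict) -> dict:
    """Trim and lowercase names, and default missing amounts to 0.0."""
    return {
        "customer": record.get("customer", "").strip().lower(),
        "amount": float(record.get("amount") or 0.0),
    }

def test_normalize_record_trims_and_defaults():
    raw = {"customer": "  Acme Corp ", "amount": None}
    assert normalize_record(raw) == {"customer": "acme corp", "amount": 0.0}
```

Run with `pytest <file>.py`; a CI pipeline would execute the same command on every commit.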
Posted 6 days ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
ECI is the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. From its unmatched range of services, ECI provides stability, security, and improved business performance, freeing clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide, with over $3 trillion of assets under management, put their trust in ECI. At ECI, we believe success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.

The Opportunity
ECI has an exciting opportunity for a Cloud Data Engineer. The full-time position is open for an experienced Sr. Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer facing, requiring EXCELLENT communication and people skills. A positive attitude, rigorous work habits, and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role.

What you will do:
A senior cloud data engineer with 7+ years of experience. Strong knowledge and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake data warehouse, and Azure Fabric. Good to have: Azure Databricks, Azure Cosmos DB, Azure AI, etc. Must have experience in developing cloud-based applications. Should be able to analyze problems and provide solutions. Experience in designing, implementing, and managing data warehouse solutions using Azure Synapse Analytics or similar technologies. Experience in migrating data from on-premises to cloud. Proficiency in data modeling techniques and experience in designing and implementing complex data models. Experience in designing and developing ETL/ELT processes to move data between systems and transform data for analytics. Strong programming skills in languages such as SQL, Python, or Scala, with experience in developing and maintaining data pipelines. Experience in at least one reporting tool such as Power BI or Tableau. Ability to work effectively in a team environment and communicate complex technical concepts to non-technical stakeholders. Experience in managing and optimizing databases, including performance tuning, troubleshooting, and capacity planning. Understand business requirements and convert them to technical designs for implementation. Understand business requirements, perform analysis, and develop and test code. Design and develop cloud-based applications using Python on a serverless framework. Strong communication, analytical, and troubleshooting skills. Create, maintain, and enhance applications. Work independently as an individual contributor with minimal supervision. Follow Agile methodology (Scrum).

Who you are:
Experience in developing cloud-based data applications. Hands-on experience in Azure data services, data warehousing, ETL, etc. Understanding of cloud architecture principles and best practices, including scalability, high availability, disaster recovery, and cost optimization, with a focus on designing data solutions for the cloud. Experience in developing pipelines using ADF and Synapse. Hands-on experience in migrating data from on-premises to cloud. Strong experience in writing complex SQL scripts and transformations. Able to analyze problems and provide solutions.
Knowledge of CI/CD pipelines is a plus. Knowledge of Python and API Gateway is an added advantage.

Bonus (nice to have): Product Management/BA experience.

ECI’s culture is all about connection - connection with our clients, our technology, and most importantly with each other. In addition to working with an amazing team around the world, ECI also offers a competitive compensation package and so much more! If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you!

Love Your Job, Share Your Technology Passion, Create Your Future Here!
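Several requirements above involve moving on-premises SQL data to Azure. As a hedged sketch of one common approach: extract with pyodbc, stage as CSV in memory, and upload with the azure-storage-blob SDK. The connection strings, table, and container names are placeholders, and a production migration would normally use ADF or Synapse pipelines rather than a hand-rolled script.

```python
# On-prem SQL -> Azure Blob staging sketch; all names are placeholders.
import io
import pandas as pd
import pyodbc
from azure.storage.blob import BlobServiceClient

src = pyodbc.connect("DSN=onprem_sql;UID=user;PWD=secret")  # placeholder
df = pd.read_sql("SELECT * FROM dbo.customers", src)        # extract

buf = io.StringIO()
df.to_csv(buf, index=False)                                 # stage as CSV

blob_service = BlobServiceClient.from_connection_string("<conn-string>")
container = blob_service.get_container_client("raw-zone")
container.upload_blob("customers/customers.csv", buf.getvalue(),
                      overwrite=True)
```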
Posted 6 days ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position Overview:
We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
• Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
• Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
• Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
• Review code and provide feedback to junior engineers to maintain high-quality and scalable solutions.
• Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
• Lead by example in object-oriented development, particularly using Scala and Java.
• Translate complex requirements into clear, actionable technical tasks for the team.
• Contribute to the development of ETL processes for integrating data from various sources.
• Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
• 8+ years of professional experience in Big Data development and engineering.
• Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
• Solid object-oriented development experience with Scala and Java.
• Strong SQL skills with experience working with large data sets.
• Practical experience designing, installing, configuring, and supporting Big Data clusters.
• Deep understanding of ETL processes and data integration strategies.
• Proven experience mentoring or supporting junior engineers in a team setting.
• Strong problem-solving, troubleshooting, and analytical skills.
• Excellent communication and interpersonal skills.

Preferred Qualifications:
• Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
• Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
• Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects. A leadership role in shaping and mentoring the next generation of engineers. A supportive and collaborative team culture. A flexible working environment. Competitive compensation and professional growth opportunities.
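Given the stack named above (Spark plus Kafka), a minimal structured-streaming read is sketched below. It is in PySpark rather than the Scala the role emphasizes, to keep one language across this document's examples; the broker address and topic are placeholders, and the job additionally needs the spark-sql-kafka connector on the classpath.

```python
# PySpark structured streaming from Kafka; broker/topic are placeholders.
# Requires the spark-sql-kafka-0-10 connector package at submit time.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events-topic")
    .load()
)

# Kafka values arrive as bytes; cast to string before processing
decoded = events.selectExpr("CAST(value AS STRING) AS body")

query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```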
Posted 6 days ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Your Team Responsibilities
MSCI has an immediate opening in one of our fastest-growing product lines. As a Lead Architect within Sustainability and Climate, you are an integral part of a team that works to develop high-quality architecture solutions for various software applications on modern cloud-based technologies. As a core technical contributor, you are responsible for conducting critical architecture solutions across multiple technical areas to support project goals. The systems under your responsibility will be amongst the most mission-critical systems of MSCI. They require strong technology expertise and a strong sense of enterprise system design, state-of-the-art scalability and reliability, but also innovation. Your ability to make technology decisions in a consistent framework to support the growth of our company and products, lead the various software implementations in close partnership with global leaders and multiple product organizations, and drive technology innovation will be the key measures of your success in our dynamic and rapidly growing environment. At MSCI, you will be operating in a culture where we value merit and track record. You will own the full life-cycle of the technology services and provide management, technical, and people leadership in the design, development, quality assurance, and maintenance of our production systems, making sure we continue to scale our great franchise.

Your Skills And Experience That Will Help You Excel
Prior senior software architecture roles. Proficiency in programming languages such as Python, Java, or Scala, and knowledge of SQL and NoSQL databases. Drive the development of conceptual, logical, and physical data models aligned with business requirements. Lead the implementation and optimization of data technologies, including Apache Spark. Experience with one of the table formats, such as Delta or Iceberg. Strong hands-on experience in data architecture, database design, and data modeling. Proven experience as a Data Platform Architect or in a similar role, with expertise in Airflow, Databricks, Snowflake, Collibra, and Dremio. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Ability to dive into details; a hands-on technologist with strong core computer science fundamentals. Strong preference for financial services experience. Proven leadership of large-scale distributed software teams that have delivered great products on deadline. Experience in a modern iterative software development methodology. Experience with globally distributed teams and business partners. Experience in building and maintaining applications that are mission critical for customers. M.S. in Computer Science, Management Information Systems, or a related engineering field. 15+ years of software engineering experience. Demonstrated consensus builder and collegial peer.

About MSCI
What we offer you: Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.
MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.
MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.
To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.
Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
Posted 6 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Delivering on critical business priorities while ensuring alignment with the wider architectural vision
Identifying and helping address potential risks in the data supply chain
Following and contributing to technical standards
Designing and developing analytical data models
Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
9 to 11 years’ experience implementing data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation, and manipulation
Experience modelling data for analytical consumers
Ability to automate and streamline the build, test, and deployment of data pipelines
Experience with cloud-native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
An inclination to mentor; an ability to lead and deliver medium-sized components independently
Technical Skills (Must Have)
ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
Big Data: Experience with ‘big data’ platforms such as Hadoop, Hive, or Snowflake for data storage and processing
Data Warehousing & Database Management: Expertise in data warehousing concepts and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Data Governance: A strong grasp of principles and practice, including data quality, security, privacy, and compliance
Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
Others: Experience using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI
Certification in any one or more of the above topics would be an advantage.
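As an illustration of the hands-on pipeline-building asked for above, a minimal batch-ETL sketch in PySpark; the paths, column names, and the daily-revenue output model are hypothetical:

```python
# Minimal sketch: conform raw orders and publish an analytical daily model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

orders = spark.read.parquet("s3://raw/orders/")        # placeholder source
customers = spark.read.parquet("s3://raw/customers/")  # placeholder source

daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .join(customers, "customer_id", "left")
         .groupBy("order_date", "segment")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("active_customers")))

(daily.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://curated/daily_revenue/"))              # placeholder target
```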
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Delivering on critical business priorities while ensuring alignment with the wider architectural vision
Identifying and helping address potential risks in the data supply chain
Following and contributing to technical standards
Designing and developing analytical data models
Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
9 to 11 years’ experience implementing data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation, and manipulation
Experience modelling data for analytical consumers
Ability to automate and streamline the build, test, and deployment of data pipelines
Experience with cloud-native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
An inclination to mentor; an ability to lead and deliver medium-sized components independently
Technical Skills (Must Have)
ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
Big Data: Experience with ‘big data’ platforms such as Hadoop, Hive, or Snowflake for data storage and processing
Data Warehousing & Database Management: Expertise in data warehousing concepts and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Data Governance: A strong grasp of principles and practice, including data quality, security, privacy, and compliance
Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
Others: Experience using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI
Certification in any one or more of the above topics would be an advantage.
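This Pune listing mirrors the Chennai role above, so rather than repeating that pipeline sketch, here is a different facet it also names - data quality and controls - as a minimal PySpark validation gate; the thresholds, paths, and column names are hypothetical, and reading Avro assumes the spark-avro package is installed:

```python
# Minimal sketch of a data-quality gate run before publishing a batch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate-sketch").getOrCreate()

df = spark.read.format("avro").load("s3://landing/trades/")  # placeholder

total = df.count()
null_keys = df.filter(F.col("trade_id").isNull()).count()
dupes = total - df.dropDuplicates(["trade_id"]).count()

# Fail the batch if keys are missing or duplicated beyond tolerance.
if total == 0 or null_keys / total > 0.01 or dupes > 0:
    raise ValueError(f"DQ gate failed: rows={total}, null_keys={null_keys}, dupes={dupes}")

df.write.mode("append").parquet("s3://conformed/trades/")    # placeholder
```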
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
10.0 - 13.0 years
8 - 17 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Detailed job description - Skill Set:
We are looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
Mandatory Skills: AWS, Databricks
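For a sense of the expert-level Databricks development this role expects, a small sketch of a curation step that lands a Unity Catalog Delta table; the catalog/schema/table and the S3 path are hypothetical, and it assumes a Databricks runtime where `spark` is already provided:

```python
# Minimal sketch: curate raw click events into a partitioned Delta table
# registered in Unity Catalog.
from pyspark.sql import functions as F

raw = spark.read.json("s3://lake/raw/clicks/")          # placeholder path

curated = (raw
           .filter(F.col("event_type").isNotNull())
           .withColumn("event_date", F.to_date("event_ts")))

(curated.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("event_date")
 .saveAsTable("main.analytics.clicks"))                 # catalog.schema.table
```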
Posted 6 days ago
6.0 - 10.0 years
12 - 15 Lacs
Chennai, Coimbatore, Mumbai (All Areas)
Work from Office
We have an urgent requirement for the role of Senior Azure Data Engineer.
Experience: 6+ years
Notice Period: 0-15 days max
Position: C2H
Should be able to work flexible timings; communication should be excellent.
Must Have: Strong understanding of ADF, Azure, Databricks, and PySpark; strong understanding of SQL, ADO, and Power BI; Unity Catalog is mandatory.
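As a concrete flavor of the ADF + Databricks + Unity Catalog stack listed as must-have, a sketch of a notebook cell downstream of an ADF copy activity that upserts the landed file into a Delta table; the storage path, table name, and join key are hypothetical, and a Databricks runtime with `spark` provided is assumed:

```python
# Minimal sketch: read a file landed by ADF and merge it into a Unity Catalog
# Delta table.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

landed = (spark.read
          .option("header", "true")
          .csv("abfss://landing@acct.dfs.core.windows.net/sales/"))  # placeholder

updates = landed.withColumn("amount", F.col("amount").cast("double"))

target = DeltaTable.forName(spark, "main.sales.orders")  # placeholder table
(target.alias("t")
 .merge(updates.alias("u"), "t.order_id = u.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```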
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role:
As a Director in Software Engineering, you will provide comprehensive leadership to senior managers and high-level professionals. You will have primary responsibility for the performance and results within your area, ensuring that all software engineering activities align with business strategies. Your role is crucial for steering the direction of major projects and technological advancements that will drive the company forward.
Responsibilities:
Provide strategic leadership and direction for the product software engineering department, aligning it with overall business objectives in the context of a matrixed organization.
Communicate effectively in a matrixed organization with senior management, peers, and subordinates to ensure alignment and collaboration.
Develop and define departmental objectives, strategies, and goals to drive the success of software projects.
Establish and maintain positive interpersonal relationships within the department and with other stakeholders.
Stay updated with relevant knowledge, technologies, and best practices to drive innovation within the department.
Ensure compliance with quality standards and best practices in software development.
Make critical decisions and solve complex problems related to software development and team management.
Develop and build high-performing teams of software engineers, fostering their growth and productivity.
Organize, plan, and prioritize the department's work to ensure efficient use of resources and timely project delivery.
Utilize data analysis and information to drive data-driven decisions and measure the success of software products.
Monitor development processes, framework adoptions, and project surroundings, optimizing efficiency and adherence to standards.
Provide coaching and mentorship to team members, fostering their professional growth and development.
Provide guidance and direction to subordinates, ensuring they align with the department's vision.
Monitor ongoing processes, materials, or surroundings, providing feedback for continuous improvement.
Evaluate information and software products to ensure compliance with industry standards.
Skills:
DevOps: The ability to use systems and processes to coordinate between development and operations teams in order to improve and speed up software development processes. This includes automation, continuous delivery, agility, and rapid response to feedback.
Product Software Engineering: The ability to design, develop, test, and deploy software products. It involves understanding user needs, defining functional specifications, designing system architecture, coding, debugging, and ensuring product quality. It also requires knowledge of various programming languages, tools and methodologies, and the ability to work within diverse teams and manage projects.
Cloud Computing: The ability to utilize and manage applications, data, and services on the internet rather than on a personal computer or local server. This skill involves understanding various cloud services (like AWS, Google Cloud, Azure), managing resources online, and setting up cloud-based platforms for a business environment.
Implementation and Delivery: The ability to translate plans and designs into action. It involves executing strategies effectively, overseeing the delivery of projects or services, and ensuring they are completed in a timely and efficient manner. It also necessitates the coordination of various tasks and management of resources to achieve the set objectives.
Problem Solving: The ability to understand a complex situation or issue and devise a solution by defining the problem, identifying potential strategies, and ultimately choosing and implementing the most effective course of action.
People Management: The ability to lead, motivate, engage, and communicate effectively with a team. This includes skills in delegation, conflict resolution, negotiation, and understanding team dynamics. It also involves building a strong team culture and managing individual performance.
Agile: The ability to swiftly and effectively respond to changes, with an emphasis on continuous improvement and flexibility. In the context of project management, it denotes a methodology that promotes adaptive planning and encourages rapid and flexible responses to changes.
APIs: The ability to design, develop, and manage Application Programming Interfaces, which constitute the set of protocols and tools used for building application software. This skill includes the capacity to create and maintain high-quality API documentation, implement API security practices, and understand API testing techniques. It also means understanding how APIs enable different software systems to communicate with each other.
Analysis: The ability to examine complex situations or problems, break them down into smaller parts, and understand how these parts work together.
Automation: The ability to design, implement, manage, and optimize automated systems or processes, often using various software tools and technologies. This skill includes understanding both the technical elements and the business implications of automated systems.
Frameworks: The ability to understand, utilise, and create structured environments for software development. This skill also involves being able to leverage existing frameworks to streamline processes, ensuring better efficiency and code manageability in software development projects.
Financial Budget Management: The ability to plan, coordinate, control, and execute financial resources over a certain period, and make decisions on the distribution of resources efficiently and effectively. This includes estimating revenues, costs, and expenses, and ensuring they align with the set goals or targets.
Application Security: The ability to protect applications from threats and attacks by identifying, fixing, and preventing security vulnerabilities. This skill involves the use of software methods and systems to protect applications against security threats.
Architectural Patterns: The ability to understand, analyze, and apply predefined design solutions to structural problems in architecture and software development. This skill involves applying proven patterns to resolve complex design challenges and create efficient and scalable structures, maintaining a balance between functional requirements and aesthetic appeal.
Competencies:
Judgement & Decision Making
Accountability
Inclusive Collaboration
Inspiration & Alignment
Courage to Take Smart Risks
Financial Acumen
Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
Posted 6 days ago
1.0 - 6.0 years
3 - 8 Lacs
Hyderabad
Work from Office
Role Description:
The role is responsible for designing, developing, and maintaining software solutions for research scientists. Additionally, it involves automating operations, monitoring system health, and responding to incidents to minimize downtime. You will join a multi-functional team of scientists and software professionals that enables technology and data capabilities to evaluate drug candidates and assess their abilities to affect the biology of drug targets. This team implements scientific software platforms that enable the capture, analysis, storage, and reporting for our Large Molecule Discovery Research team (Design, Make, Test and Analyze processes). The team also interfaces heavily with teams supporting our in vitro assay management systems and our compound inventory platforms. The ideal candidate has experience in the pharmaceutical or biotech industry, strong technical skills, and full-stack software engineering experience (spanning SQL, back-end and front-end web technologies, and automated testing).
Roles & Responsibilities:
Work closely with the product team, business team (including scientists), and other stakeholders
Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications
Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements
Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software
Conduct code reviews to ensure code quality and adherence to best practices
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations
Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently
Stay updated with the latest technology and security trends and advancements
Basic Qualifications and Experience:
Master’s degree with 1-3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
Bachelor’s degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
Diploma with 7-9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field
Preferred Qualifications and Experience:
1+ years of experience implementing and supporting biopharma scientific software platforms
Functional Skills:
Must-Have Skills:
Proficient in Java or Python
Proficient in at least one JavaScript UI framework (e.g., ExtJS, React, or Angular)
Proficient in SQL (e.g., Oracle, PostgreSQL, Databricks)
Good-to-Have Skills:
Experience with event-based architecture and serverless AWS services such as EventBridge, SQS, Lambda, or ECS
Experience with Benchling
Hands-on experience with full-stack software development
Strong understanding of software development methodologies, mainly Agile and Scrum
Working experience with DevOps practices and CI/CD pipelines
Experience with infrastructure-as-code (IaC) tools (Terraform, CloudFormation)
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk)
Experience with automated testing tools and frameworks
Experience with big data technologies (e.g., Spark, Databricks, Kafka)
Experience leveraging AI assistants (e.g., GitHub Copilot) to accelerate software development and improve code quality
Professional Certifications:
AWS Certified Cloud Practitioner (preferred)
Soft Skills:
Excellent problem-solving, analytical, and troubleshooting skills
Strong communication and interpersonal skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to learn quickly and work independently
Team-oriented, with a focus on achieving team goals
Ability to manage multiple priorities successfully
Strong presentation and public speaking skills
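To illustrate the event-based, serverless AWS pattern named in the good-to-have skills, a minimal sketch of a Lambda handler consuming SQS-delivered assay events; the bucket name, queue wiring, and payload shape are hypothetical, not this team's actual design:

```python
# Minimal sketch: an AWS Lambda handler that persists SQS-delivered assay
# events to S3.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])     # SQS message body
        assay_id = payload["assay_id"]           # hypothetical field
        s3.put_object(
            Bucket="research-results",           # placeholder bucket
            Key=f"assays/{assay_id}.json",
            Body=json.dumps(payload).encode("utf-8"),
        )
    return {"processed": len(records)}
```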
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)
Keywords / Skillset:
AWS SageMaker, Azure ML Studio, GCP Vertex AI
PySpark, Azure Databricks
MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
Kubernetes, AKS, Terraform, FastAPI
Responsibilities
Model deployment, model monitoring, model retraining
Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline
Drift detection: data drift, model drift
Experiment tracking
MLOps architecture
REST API publishing
Job Responsibilities
Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
Work on a backlog of activities to raise MLOps maturity in the organization.
Proactively introduce a modern, agile, and automated approach to Data Science.
Conduct internal training and presentations about MLOps tools’ benefits and usage.
Required Experience And Qualifications
Wide experience with Kubernetes.
Experience in the operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
Good understanding of ML and AI concepts.
Hands-on experience in ML model development.
Proficiency in Python used both for ML and automation tasks.
Good knowledge of Bash and the Unix command-line toolkit.
Experience in CI/CD/CT pipeline implementation.
Experience with cloud platforms - preferably AWS - would be an advantage.
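For a concrete taste of the experiment-tracking responsibility above, a minimal MLflow sketch; the model, data, and metric are placeholders rather than a real training pipeline:

```python
# Minimal sketch: log parameters, a metric, and a model artifact to MLflow.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=500).fit(X_tr, y_tr)
    score = f1_score(y_te, model.predict(X_te))
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("f1", score)
    mlflow.sklearn.log_model(model, "model")  # can later be registered/deployed
```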
Posted 6 days ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Qualification & Experience:
Minimum of 8 years of experience as a Data Scientist/Engineer with demonstrated expertise in data engineering and cloud computing technologies.
Technical Responsibilities
Excellent proficiency in Python, with a strong focus on developing advanced skills.
Extensive exposure to NLP and image processing concepts.
Proficient in version control systems like Git.
In-depth understanding of Azure deployments.
Expertise in OCR, ML model training, and transfer learning.
Experience working with unstructured data formats such as PDFs, DOCX, and images.
Strong familiarity with data science best practices and the ML lifecycle.
Strong experience with data pipeline development, ETL processes, and data engineering tools such as Apache Airflow, PySpark, or Databricks.
Familiarity with cloud computing platforms like Azure, AWS, or GCP, including services like Azure Data Factory, S3, Lambda, and BigQuery.
Tool Exposure: Advanced understanding and hands-on experience with Git, Azure, Python, R programming, and data engineering tools such as Snowflake, Databricks, or PySpark.
Data Mining, Cleaning and Engineering: Leading the identification and merging of relevant data sources, ensuring data quality, and resolving data inconsistencies.
Cloud Solutions Architecture: Designing and deploying scalable data engineering workflows on cloud platforms such as Azure, AWS, or GCP.
Data Analysis: Executing complex analyses against business requirements using appropriate tools and technologies.
Software Development: Leading the development of reusable, version-controlled code under minimal supervision.
Big Data Processing: Developing solutions to handle large-scale data processing using tools like Hadoop, Spark, or Databricks.
Principal Duties & Key Responsibilities:
Leading data extraction from multiple sources, including PDFs, images, databases, and APIs.
Driving optical character recognition (OCR) processes to digitize data from images.
Applying advanced natural language processing (NLP) techniques to understand complex data.
Developing and implementing highly accurate statistical models and data engineering pipelines to support critical business decisions, and continuously monitoring their performance.
Designing and managing scalable cloud-based data architectures using Azure, AWS, or GCP services.
Collaborating closely with business domain experts to identify and drive key business value drivers.
Documenting model design choices, algorithm selection processes, and dependencies.
Effectively collaborating in cross-functional teams within the CoE and across the organization.
Proactively seeking opportunities to contribute beyond assigned tasks.
Required Competencies:
Exceptional communication and interpersonal skills.
Proficiency in Microsoft Office 365 applications.
Ability to work independently, demonstrate initiative, and provide strategic guidance.
Strong networking, communication, and people skills.
Outstanding organizational skills with the ability to work independently and as part of a team.
Excellent technical writing skills.
Effective problem-solving abilities.
Flexibility and adaptability to work flexible hours as required.
Key Competencies / Values:
Client Focus: Tailoring skills and understanding client needs to deliver exceptional results.
Excellence: Striving for excellence defined by clients, delivering high-quality work.
Trust: Building and retaining trust with clients, colleagues, and partners.
Teamwork: Collaborating effectively to achieve collective success.
Responsibility: Taking ownership of performance and safety, ensuring accountability.
People: Creating an inclusive environment that fosters individual growth and development.
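To ground the Airflow/PySpark pipeline experience asked for above, a minimal sketch of an orchestration DAG wiring extract, OCR, and model-scoring steps; the DAG id and schedule are hypothetical and the task bodies are stubs:

```python
# Minimal Airflow 2.x DAG sketch: extract -> OCR -> scoring, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull new PDFs/images from the source systems")        # stub

def run_ocr(**_):
    print("digitize the documents with an OCR engine")           # stub

def score(**_):
    print("apply the NLP/statistical models and store results")  # stub

with DAG(
    dag_id="document_pipeline_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_ocr = PythonOperator(task_id="ocr", python_callable=run_ocr)
    t_score = PythonOperator(task_id="score", python_callable=score)
    t_extract >> t_ocr >> t_score
```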
Posted 6 days ago
12.0 - 14.0 years
14 - 18 Lacs
Hyderabad, Bengaluru
Hybrid
We are looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
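Since this listing repeats the architect role above, here is a different hands-on facet it stresses - Databricks performance tuning - as a short Delta-maintenance sketch; the table and Z-order column are hypothetical, and it assumes a Databricks runtime where `spark` is provided and these Delta SQL commands are available:

```python
# Minimal maintenance sketch: compact and Z-order a Delta table, prune old
# files, then verify the operations landed.
spark.sql("OPTIMIZE main.analytics.events ZORDER BY (customer_id)")
spark.sql("VACUUM main.analytics.events RETAIN 168 HOURS")  # keep 7 days of history

# Inspect recent table operations to confirm the maintenance ran.
spark.sql("DESCRIBE HISTORY main.analytics.events").show(5, truncate=False)
```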
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview
Viraaj HR Solutions is dedicated to connecting top talent with forward-thinking companies. Our mission is to provide exceptional talent acquisition services while fostering a culture of trust, integrity, and collaboration. We prioritize our clients' needs and work tirelessly to ensure the ideal candidate-job match. Join us in our commitment to excellence and become part of a dynamic team focused on driving success for individuals and organizations alike.
Role Responsibilities
Design, develop, and implement data pipelines using Azure Data Factory.
Create and maintain data models for structured and unstructured data.
Extract, transform, and load (ETL) data from various sources into data warehouses.
Develop analytical solutions and dashboards using Azure Databricks.
Perform data integration and migration tasks with Azure tools.
Ensure optimal performance and scalability of data solutions.
Collaborate with cross-functional teams to understand data requirements.
Utilize SQL Server for database management and data queries.
Implement data quality checks and ensure data integrity.
Work on data governance and compliance initiatives.
Monitor and troubleshoot data pipeline issues to ensure reliability.
Document data processes and architecture for future reference.
Stay current with industry trends and Azure advancements.
Train and mentor junior data engineers and team members.
Participate in design reviews and provide feedback for process improvements.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
3+ years of experience in a data engineering role.
Strong expertise in Azure Data Factory and Azure Databricks.
Proficient in SQL for data manipulation and querying.
Experience with data warehousing concepts and practices.
Familiarity with ETL tools and processes.
Knowledge of Python or other programming languages for data processing.
Ability to design scalable cloud architecture.
Experience with data modeling and database design.
Effective communication and collaboration skills.
Strong analytical and problem-solving abilities.
Familiarity with performance tuning and optimization techniques.
Knowledge of data visualization tools is a plus.
Experience with Agile methodologies.
Ability to work independently and manage multiple tasks.
Willingness to learn and adapt to new technologies.
Skills: ETL, Azure Databricks, SQL Server, Azure, data governance, Azure Data Factory, Python, data warehousing, data engineering, data integration, performance tuning, Python scripting, SQL, data modeling, data migration, data visualization, analytical solutions, PySpark, Agile methodologies, data quality checks
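As one concrete ETL hop of the kind this role describes, a sketch that pulls a SQL Server table into Azure Databricks over JDBC and lands it as Delta; the server, table, and credentials are placeholders, the SQL Server JDBC driver is assumed installed, and real jobs should read secrets from a secret scope rather than code:

```python
# Minimal sketch: SQL Server -> Databricks -> Delta staging table.
from pyspark.sql import functions as F

src = (spark.read.format("jdbc")
       .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=sales")
       .option("dbtable", "dbo.orders")
       .option("user", "etl_user")        # placeholder credential
       .option("password", "<secret>")    # placeholder credential
       .load())

cleaned = (src.dropDuplicates(["order_id"])
              .withColumn("loaded_at", F.current_timestamp()))

cleaned.write.format("delta").mode("overwrite").saveAsTable("staging.orders")
```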
Posted 6 days ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
Hyderabad, Telangana | Full Time
Neudesic is currently seeking Senior Data Scientists. This role requires the perfect mix of being a brilliant technologist and a deep appreciation for how technology drives business value. You will have broad and deep technology knowledge and the ability to architect solutions by mapping client business problems to end-to-end technology solutions. Demonstrated ability to engage senior-level technology decision makers in data management, real-time analytics, predictive analytics, and data visualization is a must. To be successful, you must exhibit the strong leadership qualities necessary for building trust with clients and our technologists, with the ability to deliver ML/DL projects to successful completion. You will partner with solution architects to drive client success by providing practical guidance based on your years of experience in data management and visualization solutions. You will partner with a diverse sales unit to professionally represent Neudesic's experience and ability to drive business results. In addition, you will assist in creating sales assets that clearly communicate our value proposition to technical decision makers.
Experience: 6+ years
Primary Skills: Python, SQL, ML, NLP, data models, data insights
Strong mathematical skills to help collect, measure, organize, and analyze data.
Knowledge of programming languages like SQL and Python.
Technical proficiency in database design and development, data models, and techniques for data mining and segmentation.
Proficiency in statistics and statistical packages like Excel for analyzing data sets.
Knowledge of data visualization software like Power BI is desirable.
Knowledge of how to create and apply the most accurate algorithms to datasets in order to find solutions.
Problem-solving skills.
Adept at queries, writing reports, and making presentations.
Team-working skills.
Verbal and written communication skills.
Proven working experience in data analysis.
Job Description 2
Expertise in a broad set of ML approaches and techniques, ranging from artificial neural networks to Bayesian non-parametric methods, model preparation and selection (feature engineering, PCA, model assessment), and modeling techniques (optimization, simulation).
Proficiency in data analysis, modeling, and web services in Python.
GPU programming experience.
Natural Language Processing experience (ontology detection, named entity recognition and disambiguation) and predictive analytics experience a plus.
Familiarity with the existing ML stack (Azure Machine Learning, scikit-learn, TensorFlow, Keras, and others).
SQL/NoSQL experience.
Experience with Apache Spark on Databricks or a similar platform for crunching massive amounts of data.
Experience leveraging AI in the CX (Customer Experience) domain a plus: Service (topic analysis, aspect-based sentiment analysis, NLP in a service context), Pre-Sales (segmentation and propensity models), as well as Customer Success (sentiment analysis, best-agent routing, churn prediction, customer health score, recommender systems for next best action) using machine learning and data science for recurring-revenue business models.
Software Development Skills
Experience in SQL, development experience in at least one scripting language (Python, Perl, etc.), and one high-level programming language (Java).
Experience with containerized applications on cloud (Azure Kubernetes Service), cloud databases (Azure SQL), and data storage (Azure Data Lake Storage, File storage).
More About Our Predictive Enterprise Service Line
The digital business uses data as a competitive differentiator. The explosion of big data, machine learning, and cloud computing power creates an opportunity to make a quantum leap forward in business understanding and customer engagement. The availability of massive amounts of information, massive computing power, and advancements in artificial intelligence allows the digital business to more accurately predict, plan for, and capture opportunity unlike ever before. The Predictive Enterprise service line is the evolution from using data strictly as a reporting mechanism of what’s happened to leveraging the latest in advanced analytics to predict and prescribe future business action. Our services include:
Data Management Solutions: We build architectures, policies, practices, and procedures that manage the full data lifecycle of an enterprise. We bring internal and exogenous datasets together to formulate new perspectives and drive data-thinking.
Self-Service Data Solutions: We create classic self-service and modern data-blending solutions that enable end users to enrich pre-authored analytic reports by blending them with additional data sources.
Real-Time Analytic Solutions: We build real-time analytics solutions on data-in-motion that eliminate the dependency on stale and static data sets, resulting in the ability to immediately query and analyze diverse data sets.
Machine Learning Solutions: We build machine-learning solutions that support the most complex decision support systems.
Neudesic is an Equal Opportunity Employer
Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.
Neudesic is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located.
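To make the CX-style modeling above concrete, an illustrative scikit-learn sketch of a tiny sentiment classifier; the texts and labels are toy data, not a real corpus or a production approach:

```python
# Illustrative sketch only: TF-IDF features feeding a logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "great support, resolved my issue fast",
    "terrible wait times and no follow-up",
    "very helpful agent, clear answers",
    "still broken after two calls",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
]).fit(texts, labels)

print(clf.predict(["the agent was very helpful"]))  # expected: [1]
```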
More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
Posted 6 days ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
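To make these concrete, here is a small, self-contained PySpark warm-up of the sort often used to practice these skills together; the data is made up, and on Databricks the `spark` session is already provided:

```python
# Warm-up touching Spark, Python, and SQL-style aggregation at once.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("interview-warmup").getOrCreate()

df = spark.createDataFrame(
    [("A", 10), ("A", 15), ("B", 7)],
    ["category", "value"],
)

df.groupBy("category").agg(F.avg("value").alias("avg_value")).show()
# +--------+---------+
# |category|avg_value|
# +--------+---------+
# |       A|     12.5|
# |       B|      7.0|
# +--------+---------+
```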
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!